title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BitDelta: Your Fine-Tune May Only Be Worth One Bit | Accept (poster) | Summary: This paper introduces BitDelta, which quantizes the aggregated weight updates (the authors call this the “delta”) to 1-bit after full fine-tuning. The paper claims that the approach has two applications: 1. First, it shows that the delta is highly redundant, and 2. It is useful in multi-client-single-server applications where the high-precision model is saved on the server and each client only stores its own 1-bit delta. They show that in this scenario, generation achieves up to a 10x memory reduction (with a similar latency improvement).
Strengths: 1. The paper studies an important problem for fine-tuning LLMs with a new approach where they quantize the delta to 1-bit,
2. The experiments are done on the most important models like LLaMa-2 and Mistral,
3. The paper provides a kernel for INT1xFP16 matrix multiplication.
Weaknesses: 1. The authors claim that BitDelta shows the potential redundancy of information added during fine-tuning. However, this is not a new finding, and almost all PEFT approaches (for example, LoRA) are based on this fact.
2. It seems that the paper completely misses the full fine-tuning costs and only measures the memory/latency of the serving step. I would suggest having “apples-to-apples” comparisons that compare the fine-tuning+serving cost (including memory and runtime) of “full fine-tuning + 1-bit optimization + serving” against “fine-tuning + serving” in PEFT schemes (like LoRA).
3. In Table 6, the paper shows higher accuracy for FP+delta compared to GPTQ. Again, I would rather see the memory vs. accuracy tradeoff in such comparisons as a function of the number of clients. The fact is FP+delta does not have the same memory as GPTQ (please correct me if I am wrong).
4. It would be nice to have a performance model for the latency of decoding as a function of the number of clients. This is also missing and needs to be included, as this is the main claim of the paper.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please check the "weaknesses" section. I would be happy to discuss and change my score.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: I think the authors should define and present the limitations of the method more clearly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the kind review! Please find below our point-by-point response regarding your feedback:
> 1. The authors claim that BitDelta shows the potential redundancy of information added during fine-tuning. However, this is not a new finding, and almost all PEFT approaches (for example, LoRA) are based on this fact.
We agree that the redundant fine-tune information angle is not new, hence our analogy that LoRA enforces structure **during training**. However, we believe there is nontrivial novelty in using this idea to accurately quantize the weight delta **post training to 1-bit** for full-parameter fine-tuned models, and successfully translating this reduced memory consumption to a >10x wall clock speedup in multi-tenant settings.
> 2. It seems that the paper completely misses the full fine-tuning costs and only measures the memory/latency of the serving step. I would suggest having “apples-to-apples” comparisons that compare the fine-tuning+serving cost (including memory and runtime) of “full fine-tuning + 1-bit optimization + serving” against “fine-tuning + serving” in PEFT schemes (like LoRA).
We respectfully disagree with the statement's premise -- to clarify, we do not fine-tune our own models, and instead target existing popular SOTA fine-tuned models on platforms like HuggingFace, as this setting is where our methodology’s downstream applications are most relevant. As such, the effective cost of such models is amortized over many people around the world. If we were in a different setting and needed to fine-tune our own models from scratch (e.g., in a niche domain), then we would agree that a full apples-to-apples comparison including the cost of fine-tuning would be more appropriate.
However, the reviewer raises an interesting point and we plan to clarify this in the final manuscript to ensure the value proposition of our work is more accurately understood.
> 3. In Table 6, the paper shows higher accuracy for FP+delta compared to GPTQ. Again, I would rather see the memory vs. accuracy tradeoff in such comparisons as a function of the number of clients. The fact is FP+delta does not have the same memory as GPTQ (please correct me if I am wrong).
The reviewer is correct in that $FP16+\Delta$ has a different memory footprint than $GPTQ$. The crossover point would be about 5 models. Serving separate quantized models is mainly relevant in low-batch (low number of clients) and low-memory settings. However, as shown in Table 6, BitDelta can also be applied to quantized base models, which is a viable solution in such settings. For example, when serving 3 models, it is preferable to represent them as one 8-bit base model plus three 1-bit deltas, instead of three separate 4-bit models, in terms of both accuracy and memory.
Nonetheless, BitDelta is not intended to be useful in this regime (low-batch + low-memory), and the quantization ablation mainly serves to show the robustness of the method in terms of accuracy. However, the remark that $FP16+\Delta$ outperforms $GPTQ$ may mislead readers to overgeneralize, which we will address in the final manuscript.
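The ~5-model crossover between $FP16+\Delta$ and separately quantized 4-bit models can be checked with back-of-the-envelope arithmetic. The sketch below is a toy memory model with illustrative bit-widths (function names are ours, not from the paper's code), ignoring activations and KV caches:

```python
def mem_fp16_plus_deltas(num_models, params):
    # One FP16 base model (16 bits/param) plus one 1-bit delta per served model.
    return params * (16 + num_models * 1) / 8  # bytes

def mem_separate_4bit(num_models, params):
    # Each fine-tuned model stored independently at 4 bits/param (GPTQ-style).
    return params * num_models * 4 / 8  # bytes

# 16 + B < 4B  <=>  B > 16/3 ~ 5.3: the shared-base scheme uses less memory
# once roughly six or more models are served concurrently.
for B in (5, 6):
    shared = mem_fp16_plus_deltas(B, 7e9)    # e.g., a 7B-parameter model
    separate = mem_separate_4bit(B, 7e9)
    print(B, "shared base wins" if shared < separate else "separate 4-bit wins")
```

This matches the "about 5 models" crossover stated in the response: below it, separate 4-bit models are smaller; above it, one FP16 base plus 1-bit deltas is smaller.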
> 4. It would be nice to have a performance model for the latency of decoding as a function of the number of clients. This is also missing and needs to be included, as this is the main claim of the paper.
The End-to-End Latency section (Figure 5) addresses decoding speed as a function of batch size (number of clients). We're more than happy to provide additional results if this is not what the reviewer is expecting.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thanks for your answers. The authors claim that they do not present a fine-tuning scheme, but rather a "serving" approach. I think they should highlight this as the main message of the paper and re-define the evaluation approach. I would stick to my current score.
---
Rebuttal 2:
Title: Followup
Comment: We thank the reviewer for the response. We will make sure to properly position the paper in the final manuscript. However, we wonder whether the reviewer could clarify how they think our evaluation approach should be re-defined. Our baselines (for accuracy, fine-tuned models without BitDelta applied; for latency, serving fine-tuned models separately) are fairly reasonable in this context. Are there specific aspects that the reviewer thinks are misaligned?
Given that we have additionally addressed the other concerns (fine-tuning costs, latency results, etc.), we would greatly appreciate it if the reviewer could reconsider their score. | Summary: Aiming at the storage and serving overhead caused by maintaining multiple fine-tuned LLMs for various downstream tasks, this paper proposes a memory-friendly model compression method named BitDelta, which binarizes the delta of each weight matrix and uses self-distillation to learn optimal scaling factors.
Strengths: BitDelta innovatively decomposes a fine-tuned model into its pretrained version and an additional weight matrix delta, and then successfully binarizes the delta to reduce memory overhead while preventing large performance degradation.
Weaknesses: The paper lacks innovation and has several points that do not hold up under scrutiny.
For the first contribution, the paper proposes decomposing multiple fine-tuned models into a shared pretrained model and their respective deltas, and then binarizing the deltas. The specific decomposition method is not described in the paper. Based on experience, this can be understood as using LoRA to fine-tune LLMs and binarizing the learned low-rank matrices. This is not novel, as there are already existing low-bit model fine-tuning methods, such as Q-LoRA and QA-LoRA, which are more memory-friendly and do not require retaining an additional fp16 model.
Regarding the second contribution, which involves using self-distillation to learn scaling factors, the paper's description is insufficient. For example, the initialization method of these factors is not well-explained. Furthermore, if the entire fp model's output is used to supervise the quantized model, the computational resource consumption is high. The paper could explore layer-wise optimization mechanisms to address this issue.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) The authors should describe the advantages of BitDelta compared to PEFT methods such as QLoRA, QA-LoRA, and LoftQ, which do not require saving the fp16 pretrained model and thus use storage more efficiently.
2) A more detailed description should be given, such as for GPTQ+Δ in Table 6. Additionally, the performance of FP16+Δ being superior to 4-bit GPTQ and 2-bit QuIP does not entirely prove that directly quantizing the base model is impractical, as INT8-RTN still shows better performance. If we consider 8-bit GPTQ, or other SOTA PTQ methods such as OmniQuant or AWQ, they might also perform better while reducing memory usage by half. Therefore, it is necessary to provide additional arguments from other perspectives to explain why directly quantizing the base model is not preferable.
Extensive additional experiments are not necessary, but it is important to clearly explain the aforementioned issues.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the kind review! Please find below our point-by-point response regarding your feedback:
We would first like to clarify a misconception that significantly impacts the evaluation of our work: **we do not fine-tune our own models** in this paper. Rather, we take existing popular SOTA full-parameter fine-tuned models, and compress the weight delta between it and its underlying base model. This is done in a hardware friendly way, such that inference on the deltas is fast and performant.
Fundamentally, the goal of BitDelta is to unlock the efficient multi-tenant serving of **existing SOTA full-parameter fine-tuned models** on platforms like HuggingFace. Methods like QLoRA are impactful in that they democratize fine-tuning in resource-constrained settings, trading off accuracy for a decreased memory footprint. This is most useful in settings like fine-tuning your own models locally (e.g., when you’re in a niche domain) and on the edge. Because of the differing target use cases, it’s hard to make a useful comparison that adequately captures the value propositions of both methods.
W1:
> The specific decomposition method is not described in the paper.
We describe the decomposition method in Section 3.1, lines 117-126. The base model and fine-tuned model weights are known a priori, and the decomposition is an element-wise subtraction.
> Based on experience, this can be understood as using LoRA to fine-tune LLMs and binarizing the learned low-rank matrices. This is not novel, as there are already existing low-bit model fine-tuning methods, such as Q-LoRA and QA-LoRA, which are more memory-friendly and do not require retaining an additional fp16 model.
We do not fine-tune the LLMs ourselves, please see our beginning statement.
W2:
> Regarding the second contribution, which involves using self-distillation to learn scaling factors, the paper's description is insufficient. For example, the initialization method of these factors is not well-explained.
We apologize for the unclear presentation and will revise lines 122-126. The initialization of the scaling factors is set to the mean of the absolute values of the per-tensor weight entries. This is done to minimize weight quantization error with respect to L2 norm.
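The element-wise decomposition and scale initialization described in this response can be sketched as follows. This is a minimal NumPy illustration under our own naming (not the paper's code); the per-tensor scale $\alpha = \text{mean}(|\Delta|)$ is the L2-optimal scalar for a sign matrix:

```python
import numpy as np

def binarize_delta(w_base, w_ft):
    # Element-wise decomposition: delta between fine-tuned and base weights.
    delta = w_ft - w_base
    # 1-bit representation: sign matrix plus a per-tensor scale initialized
    # to the mean absolute value of the delta entries.
    sign = np.sign(delta)
    alpha = np.abs(delta).mean()
    return sign, alpha

rng = np.random.default_rng(0)
w_base = rng.standard_normal((4, 4))
w_ft = w_base + 0.01 * rng.standard_normal((4, 4))  # small fine-tuning delta
sign, alpha = binarize_delta(w_base, w_ft)
w_approx = w_base + alpha * sign  # 1-bit reconstruction of the fine-tuned weights
```

In the paper, this $\alpha$ is then further refined by scale distillation; the sketch only shows the initialization step.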
> Furthermore, if the entire fp model's output is used to supervise the quantized model, the computational resource consumption is high. The paper could explore layer-wise optimization mechanisms to address this issue.
In Section 3.2 we describe the methodology cost, and conclude that the total computational cost is fairly similar to other PTQ methods. To be clear, the methodology assumes the existence of a trained base model, and a trained fine-tuned model, and does not fine-tune in the sense that Q-LoRA + its variants do.
One potential optimization not mentioned would be to precompute the teacher model logits (which should not be a storage issue given the low sample lengths + number of samples), which would halve the memory footprint. Nonetheless, we agree that it may be possible to further optimize this. For example, we could search for the optimal per tensor scaling factor through a grid search, loading one tensor at a time, similar to AWQ [1].
> Q1: The author should describe the advantages of BitDelta compared to PEFT methods, such as QLoRA, QA-LoRA, and LoftQ. Because they don't require saving the fp16 pretrained model, which results in more efficient storage utilization.
We do not fine-tune the LLMs ourselves, please see our beginning remark. Additionally, as shown in Table 6, BitDelta also works well in conjunction with quantized base models.
> Q2: A more detailed description should be given, such as for GPTQ+Δ in Table 6. Additionally, the performance of FP16+Δ being superior to 4-bit GPTQ and 2-bit QuIP does not entirely prove that directly quantizing the base model is impractical, as INT8-RTN still shows better performance. If we consider 8-bit GPTQ, or other SOTA PTQ methods such as OmniQuant or AWQ, they might also perform better while reducing memory usage by half. Therefore, it is necessary to provide additional arguments from other perspectives to explain why directly quantizing the base model is not preferable. Extensive additional experiments are not necessary, but it is important to clearly explain the aforementioned issues.
We did not intend to suggest that directly quantizing the base model is not preferable, and we apologize for any confusion. Our goal was to demonstrate the orthogonality of BitDelta to quantizing the base model. Given a more stringent memory constraint, providers are able to quantize the base model in addition to applying BitDelta, without significant degradation in performance. In fact, applying 8-bit quantization might have such a negligible impact on accuracy that it could be preferable purely from an inference speed perspective, regardless of memory constraints.
Our corollary statement ($FP16+\Delta$ outperforms $GPTQ$) was an interesting observation. However, we acknowledge this may not necessarily hold for stronger baselines (AWQ, OmniQuant, etc.). We will moderate our claims in the final manuscript to better reflect the scope of this finding.
[1]: AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration: https://arxiv.org/abs/2306.00978 | Summary: This paper introduces Bitdelta, a method that enables quantization with just 1 bit. The main idea is to compress the weight delta into a scalar and a binary matrix. The experimental results demonstrate that Bitdelta achieves better performance compared to other techniques.
Strengths: - Easy to read and well-structured.
- The authors effectively demonstrate that their proposed technique, Bitdelta, performs well compared to existing quantization techniques.
- It is expected that Bitdelta can be extended to areas such as multi-tenant serving systems.
Weaknesses: - A significant contribution of Bitdelta lies in its ability to reduce memory consumption through 1-bit quantization. However, the paper lacks detailed evidence to support this claim. It is necessary to compare memory consumption for each compression technique, but currently, only Bitdelta is shown.
- It would be better for the paper to mention and explain the figures at the proper locations in the text. For example, Figure 1 and Figure 4 are included but not mentioned in the text. They are hard to understand without sufficient explanation.
- Table 1 lacks information about the base models for Bitdelta and SVD. It should clearly specify whether these are based on Llama-7B or Llama-7B Chat.
Technical Quality: 2
Clarity: 4
Questions for Authors: - I am curious about the clear differences between Bitdelta and other compression technologies. For example, can Bitdelta, which adopts the Post-Training Quantization (PTQ) method, be used in combination with other PTQ techniques? Also, what happens if you combine Bitdelta with compression techniques like pruning? It would be beneficial to discuss the relationships between various compression technologies
- (lines 165-167) The authors say that generation latency is proportional to the GPU memory used. How can this claim be proven? It would be helpful to mention references or provide supporting data.
- I think BitNet has a similar purpose with its design of 1-bit quantization. A comparison of Bitdelta to BitNet is required.
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: I think the paper will be strengthened if measurements of GPU (or memory) consumption are added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the kind review! Please find below our point-by-point response regarding your feedback:
> A significant contribution of Bitdelta lies in its ability to reduce memory consumption through 1-bit quantization. However, the paper lacks detailed evidence to support this claim. It is necessary to compare memory consumption for each compression technique, but currently, only Bitdelta is shown.
We would appreciate further clarification on this point -- is the reviewer referencing compression techniques like quantization (GPTQ, AWQ), or potentially the SVD baseline in Table 1? Nevertheless, we consider delta compression in this work, which differs from classic quantization methods.
> It would be better for the paper to mention and explain the figures within the proper location of text. For example, Figure 1 and Figure 4 are included but not mentioned in the text. It is hard to understand without sufficient explanation.
> Table 1 lacks information about the base models for Bitdelta and SVD. It should clearly specify whether these are based on Llama-7B or Llama-7B Chat.
We apologize for the unclear presentation and thank the reviewer for the suggestion. We will fix these issues in the final manuscript. The Table 1 results are based on compressing the weight delta between Llama-7B and Llama-7B Chat, so the resultant model is an approximation of Llama-7B Chat.
> I am curious about the clear differences between Bitdelta and other compression technologies. For example, can Bitdelta, which adopts the Post-Training Quantization (PTQ) method, be used in combination with other PTQ techniques? Also, what happens if you combine Bitdelta with compression techniques like pruning? It would be beneficial to discuss the relationships between various compression technologies
In Table 6 we have results where we apply PTQ in conjunction with BitDelta -- we found that the two methods are fairly orthogonal. We expect other methods (pruning, etc.) that also apply to the base model to also be fairly orthogonal.
Regarding further compression of the delta, it may be possible to employ techniques such as vector quantization and incoherence processing (similar to QuIP# [1]) to achieve accurate sub 1-bit deltas. However, we have to be cognizant of the hardware friendliness of these methods, and whether the reduced delta size outweighs the associated kernel overhead.
> (lines 165-167) The authors said that generation latency is proportional to the GPU memory used. How can this claim be proven? It would be helpful to mention references or provide supporting data
The memory bound nature of LLM decoding is well documented [2]. During the decoding phase (on modern GPUs), the time taken to transfer weights and KV caches to GPU registers far outweighs the time needed to compute the associated skinny matrix multiplications.
AWQ [3] leverages this to translate a reduction in memory footprint (through weight quantization) to a ~3x wall clock speedup. We likewise translate a reduction in memory footprint (through representing multiple fine-tuned weights with just one base weight and multiple compressed deltas) to a ~10x wall clock speedup when concurrently serving 16 models.
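Since decoding is bandwidth bound, the claimed speedup can be approximated by counting bytes of weights read per decoded token. The sketch below is a toy model with assumed bit-widths (our names, not the paper's measured kernel results), ignoring KV-cache traffic and kernel overhead:

```python
def bytes_per_token(num_models, params, shared_base):
    # Bandwidth-bound decoding: per-token latency ~ bytes of weights read.
    if shared_base:
        # One FP16 base (2 bytes/param) read once, plus one 1-bit delta per model.
        return params * (2 + num_models / 8)
    # Each fine-tuned model stored and read separately in FP16.
    return params * num_models * 2

P = 7e9  # e.g., a 7B-parameter model
speedup = bytes_per_token(16, P, shared_base=False) / bytes_per_token(16, P, shared_base=True)
print(f"estimated speedup at 16 concurrently served models: {speedup:.0f}x")  # → 8x
```

This crude weight-traffic count already lands at the same order of magnitude as the reported ~10x; the measured number also reflects KV caches, batching, and kernel details.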
> I think BitNet has a similar purpose with the design of 1-bit quantization. Comparing Bitdelta to BitNet is required
BitNet fundamentally has a different purpose in that they propose a new architecture based on 1-bit weight entries for LLM **pretraining**, with the goal of showing superiority over conventional 16-bit pretraining. BitDelta differs in that it compresses the weight delta of two **existing pretrained** 16-bit models to 1-bit, while keeping the base model in 16-bit precision, with the goal of unlocking efficient multi-tenant serving of full-parameter fine-tuned models. The two are related only in that they both use $W_\text{INT1}$ matrix operations.
[1]: QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks: https://arxiv.org/abs/2402.04396
[2]: LLM Inference Unveiled: Survey and Roofline Model Insights: https://arxiv.org/abs/2402.16363
[3]: AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration: https://arxiv.org/abs/2306.00978
---
Rebuttal Comment 1.1:
Comment: - Thank you for your efforts in addressing the weaknesses and questions. Based on the authors’ responses, I understand where I was confused, and I have updated my review decision (borderline accept -> weak accept).
- Also, although the authors logically explained the memory consumption aspects, the actual memory consumption of the technique is affected by various factors; therefore, providing the actual numbers would be beneficial. | Summary: The paper proposes to quantize the weight delta of a fine-tuned LLM to 1-bit and observes that the model quality only drops a little.
During the binarization step, it requires calibrating the scaling factor with a few hundred distillation steps, which is far less than full fine-tuning. Evaluations show that the proposed method produces higher model quality than other post-training quantization techniques such as GPTQ and QuIP.
Strengths: The paper discovered a nice trade-off between fine-tuned model storage size and model quality. Trading a 16x reduction in weight size (by storing only the 1-bit delta) for the limited quality drop reported in the paper is impressive. Importantly, the binarization step has a near-post-training cost, instead of fine-tuning from the beginning. The paper did solid ablation studies on how important the scaling factor calibration is, which clearly describe the effect of each component in the proposed method. The latency study is also appreciated.
Weaknesses: While requiring low storage size, the proposed method introduces an extra binary-float matmul during inference. Although it is indeed a special kernel and can have much lower inference time than the float matmul, the overhead will become more significant when the base model is low-precision.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Line 133 mentioned the scale distillation is robust to the choice of calibration dataset. Does it mean that it can also use synthetic data? It will be an improvement if the paper presents such an ablation.
2. Line 226: can the quantized delta be merged with the quantized base model? For example, by synchronizing the scaling factors.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The paper does not have negative societal impact as far as the reviewer can tell.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the kind review! Please find below our point-by-point response regarding your feedback:
> While requiring low storage size, the proposed method introduces an extra binary-float matmul during inference. Although it is indeed a special kernel and can have much lower inference time than the float matmul, the overhead will become more significant when the base model is low-precision.
The delta kernel overhead is indeed nontrivial in the regime where the base model is low-precision and $B$ (the number of served models) is small. Similar solutions (S-LoRA, etc.) suffer from the same issue and are actually slower than BitDelta when $B \leq 4$. Such multi-tenant solutions work best in higher batch settings.
In terms of overall throughput, with a 16-bit base model (shown in Figure 5), BitDelta outperforms the naive method of running each model separately for all $B>1$. We expect a similar result for quantized base models, though potentially with a higher crossover point.
> 1. Line 133 mentioned the scale distillation is robust to the choice of calibration dataset. Does it mean that it can also use synthetic data? It will be an improvement if the paper presents such an ablation.
Intuitively this seems very doable, considering scale distillation already does well with generic internet data. We will try to include this in the final manuscript.
> 2. Line 226: can the quantized delta be merged with the quantized base model? For example, by synchronizing the scaling factors.
We don’t see an easy way to do this losslessly, but we’re happy to chat more about this. To us it seems difficult to combine quantized weight matrices that have different scale factors.
---
Rebuttal Comment 1.1:
Title: Thank the authors for response
Comment: Thank the authors for the response. I have read all replies and comments from other reviewers. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unraveling the Gradient Descent Dynamics of Transformers | Accept (poster) | Summary: This paper proves that, with appropriate initialization, GD can train a Transformer model with either a Softmax or Gaussian attention kernel to a globally optimal solution. Besides, this paper highlights that the Gaussian attention kernel exhibits much more favorable behavior than Softmax in certain scenarios.
Strengths: 1. This paper gives a theoretical analysis of training dynamics of Transformers, and provides conditions to guarantee global convergence.
2. The comparison between Gaussian and Softmax kernels is interesting.
Weaknesses: 1. It is unclear what new insights could be gained from the global convergence result Theorem 2, given that Wu et al [2024] already have one. The weight $W^O$ is trainable in Wu et al [2024], which is in line with practice, but $W^O$ is fixed in the present paper.
2. The comparison between the Gaussian and Softmax kernels only updates $W^Q$.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The conditions in Theorem 2 is different from those in Wu et al [2024]. How do these conditions compare with each other? Which one is weaker? What new insights can we obtain?
2. The authors emphasize that one important feature of the present work is that they delve into the roles of individual variables. Why is this important? Can the results obtained with one variable trainable and others fixed explain their roles when they are simultaneously updated?
3. On line 286, is it implicitly assumed that the global optimum should have zero loss? Why must this be true?
4. Lines 376-377 claim that the results in this paper explain why transformers perform exceptionally well. Can you clarify the explanation? How does it follow from the results of the paper?
5. There are several sentences like "only updating $W^V$ already leads to global convergence". Are they suggesting that it is easier to optimize more variables? How should we understand these sentences?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Not quite. See the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >>**Your comment**: The new insight we can gain compared to Wu et al [2024]; the fixed $W^O$ in optimization.
**Response**: See the general rebuttal at the top of the page **Comment 1**.
>> **Your comment**: The comparison between the Gaussian and Softmax kernels only updates $W^Q$.
**Response**: We'd like to clarify that we also studied the role of the variable $W^V$ in the Gaussian and Softmax kernels and found it to be the same in both. We have commented on the similarity between the two Transformers in **Remark 3**. Specifically, the conclusions in Theorem 1 and Theorem 2 hold for Transformers with Gaussian attention, and the linear decrease of the loss function also comes from $W^V$ when $W^Q, W^K, W^V$ are updated. Since the conclusion and proof technique are very similar to Theorems 1 and 2, we did not include the details in the paper. To avoid confusion, we will add a corollary to clearly state the similar role of $W^V$ in both Transformers in our revised version.
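For concreteness, the two attention kernels being compared can be sketched as follows. This is a minimal single-head NumPy illustration with an assumed $1/\sqrt{d}$ scaling in both kernels, not the paper's exact parameterization:

```python
import numpy as np

def softmax_attention(Q, K):
    # Softmax kernel: row-normalized exp(<q_i, k_j> / sqrt(d)).
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # numerically stable
    return w / w.sum(axis=-1, keepdims=True)                 # rows sum to 1

def gaussian_attention(Q, K):
    # Gaussian kernel: exp(-||q_i - k_j||^2 / (2 sqrt(d))); unlike softmax,
    # each entry depends only on one (q_i, k_j) pair, with no row normalization.
    d = Q.shape[-1]
    sq_dist = (Q**2).sum(-1, keepdims=True) + (K**2).sum(-1) - 2 * Q @ K.T
    return np.exp(-sq_dist / (2 * np.sqrt(d)))
```

The absence of the normalizing denominator in the Gaussian kernel is one structural difference that simplifies the gradient dynamics relative to Softmax attention.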
>>**Your question**: The comparison of assumptions in Wu et al [2024] and our paper.
**Response**: See the general rebuttal at the top of the page **Comment 2**.
>>Your question regarding the importance of analyzing individual variables. Can the results obtained with one variable trainable and others fixed explain their roles when they are simultaneously updated?
**Response**:
**The importance of unravelling individual variables**: First, unravelling the individual role of each variable helps us understand the optimization bottlenecks in Transformers [2,3]. For example, in [2], it is shown that rank collapse of the attention kernel can occur when the variables $W^Q, W^K$ are updated, which can lead to failure in optimization. Second, accelerated optimization algorithms can be designed based on an understanding of the individual role of each variable [3,5]. In these works, the Nyström method is used in gradient computation to greatly accelerate training; the algorithm design is specific to the training dynamics of individual variables in Transformers. Third, analyzing the training dynamics of individual variables leads to new network designs that improve optimization performance [3,4,5]. In these works, attention structures other than Softmax attention are proposed to improve generalization and computational efficiency.
**Simultaneous updates**: Yes, the property obtained when analyzing updates to a single variable does carry over to the situation when this variable is updated simultaneously with the others. The reason is as follows: our over-parameterization structure and initialization ensure all the variables **always stay in a locally near-convex region**. Within this region, the optimization landscape is relatively smooth, and the partial gradient of any single variable does not change much when other variables are updated; thus, analyzing the gradients of different variables separately is able to explain the role of each variable when they are updated simultaneously.
>>Your question regarding the existence of global optimum in line 286.
**Response**: Yes, it is implicitly assumed that the global optimal solution has zero loss. This is true because of the over-parameterization of the network, where the network size is $\Omega(Nn)$ ($N$ is the sample size). When there are more parameters than samples, there exists a solution that can completely fit all the samples. Further, in some other works with similar settings, the existence of a global optimal solution can also be derived from the limit of a Cauchy sequence (see [8], Appendix B.7). To make the proof more rigorous, we will add a similar derivation of the Cauchy sequence in our revised version.
>>Your question regarding the statement "transformers perform exceptionally well".
**Response**: We claim that our result explains why Transformers perform exceptionally well because we derive a **global convergence** analysis of training Transformers (Theorems 1 and 2). These two theorems show that even the plain gradient descent algorithm can find the global optimal solution with a network size of $\Omega(Nn)$, allowing the training loss to decrease to $0$. This provides theoretical support for the outstanding training performance of Transformers.
>>Your question regarding the sentences like "only updating $W^V$ already leads to global convergence".
**Response**: No, it does not suggest that it is easier to optimize more variables. As we aim to scrutinize the role of each variable in the attention kernel, we found that global convergence is dominated by $W^V$, which prompted us to later focus on studying the **role of $W^Q$ and $W^K$** in optimization. These sentences are meant to emphasize that the role of $W^Q, W^K$ remains unclear if updating $W^V$ alone already leads to global convergence (see Theorems 1 and 2), i.e., does updating $W^Q,W^K$ contribute to the linear convergence of the loss function? Thus, these statements lead to our discussion of $W^Q$ alone in Theorem 3.
[1] Wu, Yongtao, et al. "On the convergence of encoder-only shallow transformers." Advances in Neural Information Processing Systems 36 (2024).
[2] Noci, Lorenzo, et al. "Signal propagation in transformers: Theoretical perspectives and the role of rank collapse." Advances in Neural Information Processing Systems 35 (2022): 27198-27211.
[3] Chen, Yifan, et al. "Skyformer: Remodel self-attention with Gaussian kernel and Nyström method." Advances in Neural Information Processing Systems 34 (2021): 2122-2135.
[4] Choromanski, Krzysztof, et al. "Rethinking attention with performers." arXiv preprint arXiv:2009.14794 (2020).
[5] Lu, Jiachen, et al. "Soft: Softmax-free transformer with linear complexity." Advances in Neural Information Processing Systems 34 (2021): 21297-21309.
[6] Nguyen, Quynh N., and Marco Mondelli. "Global convergence of deep networks with one wide layer followed by pyramidal topology." Advances in Neural Information Processing Systems 33 (2020): 11961-11972.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The response claims global convergence on the one hand, and initialization in a locally near convex region. How do you reconcile these two claims?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer ynwg's question
Comment: First, let us clarify that global convergence in our theorems means **convergence to the global optimal solution**. This definition is also used in other papers that analyze the optimization of networks, e.g., [1]. It is different from the traditional definition of "global convergence", which means that an algorithm converges (to a certain solution) regardless of initialization. To avoid confusion, we will correct it to "convergence to the global optimal solution" in our revised version.
Second, with the clarification above, we need to clarify that the condition of initialization in a near-convex region **does not conflict** with our analysis of achieving the global optimal solution in Theorems 1, 2, and 3. Instead, it is an **essential condition** for deriving the results in our paper. This type of claim (local initialization to achieve the global optimal solution) has been common [1,2,3], though in some works it is not pointed out directly.
[1] Nguyen, Quynh N., and Marco Mondelli. "Global convergence of deep networks with one wide layer followed by pyramidal topology." Advances in Neural Information Processing Systems 33 (2020): 11961-11972.
[2] Wu, Yongtao, et al. "On the convergence of encoder-only shallow transformers." Advances in Neural Information Processing Systems 36 (2024).
[3] Song, Bingqing, et al. "Fedavg converges to zero training loss linearly for overparameterized multi-layer neural networks." International Conference on Machine Learning. PMLR, 2023. | Summary: The authors establish different convergence theorems on the training of a single layer transformer with different trainable weight matrices and kernel functions. They prove that, under certain conditions, a one-layer Transformer with a Gaussian kernel converges faster than one with a Softmax kernel. Finally, they conduct empirical experiments to show the effectiveness of Gaussian kernel compared with Softmax kernel, validating their theory.
Strengths: The paper is overall clearly written. The authors delve into the role of different weight matrices in Transformer optimization and establish different convergence theorems with regard to different trainable weight matrices. This aspect is novel. Their theory also suggests that Gaussian kernel may be more effective than the widely used softmax kernel, which if validated in larger models, can have a great impact on the field.
Weaknesses: Minor points:
1. It may be beneficial to include insights in the main text about the critical steps in the proof that result in the different convergence rates of the Gaussian and Softmax kernels in Theorem 3.
Major points:
1. The experimental results in Figure 2 suggest that the Transformer with a Gaussian kernel converges faster. However, the difference in convergence seems more like a difference in the constant factor rather than a difference in the convergence rate.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors construct a compelling example to illustrate the failure of a Transformer with a Gaussian kernel, but it requires a specific initialization form. To achieve a similar convergence rate for the Softmax kernel, how many additional conditions are required?
2. I don’t fully understand why optimizing the query matrix alone leads to similar dynamics as when optimizing both the query and the key matrix. Could you clarify this?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author has discussed some of the limitations in the paper. Specifically, the authors explain why they choose to study a simple one-layer transformer with regression loss while in practice, transformer models are usually trained with cross-entropy loss and with multiple layers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >>**Your comment**: The difference in convergence speed in Fig. 2 seems like constant level with same rate.
**Response**: We thank the reviewer for the comment. We need to clarify that it is **reasonable** that the difference in convergence in Fig. 2 is at the constant level. Further, we provide an **empirical reason** for the different theoretical convergence rates in Theorem 3 via the landscape visualization in Fig. 4.
First, Transformers with Gaussian and Softmax attention theoretically should have **the same convergence rate** in our experimental setting in Fig. 2. Notice that the algorithm we use in Fig. 2 updates **all the variables** in the attention ($W^Q,W^K,W^V$). As discussed in Remark 3, when all the variables are updated, both Transformers can achieve a linear convergence rate with appropriate network size and initialization. Thus, the theoretical difference in convergence rate is at the constant level. However, due to the bottleneck of optimizing $W^Q$ (or $W^K$) in the Softmax Transformer as illustrated in Theorem 3, the Gaussian Transformer still converges **faster** than the Softmax Transformer when $W^Q, W^K$ are included in optimization.
Second, in terms of the different theoretical convergence rates in Theorem 3 when **only $W^Q$ (or $W^K$)** is updated, we provide an **empirical reason** via the landscape visualization in Fig. 4. By showing the existence of **more local solutions** in Transformers with Softmax attention, we provide evidence for the comparison of convergence rates: compared to a Transformer with Gaussian attention, a Transformer with Softmax attention is more likely to **obtain a sublinear convergence to stationary points**, not the global optimum.
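To make the gap between the two rates concrete, here is a small illustrative sketch (our own example, not from the paper) comparing how many iterations a linear rate $\rho^t$ and a sublinear rate $1/t$ need to reach a target loss $\varepsilon$:

```python
import math

# Illustrative sketch (our own example): number of gradient steps needed
# to reach a target loss eps under a linear rate L_t = rho**t versus a
# sublinear rate L_t = 1/t.
eps = 1e-6
rho = 0.9  # hypothetical contraction factor

t_linear = math.ceil(math.log(eps) / math.log(rho))  # smallest t with rho**t <= eps
t_sublinear = math.ceil(1 / eps)                     # smallest t with 1/t <= eps
print(t_linear, t_sublinear)  # 132 1000000
```

A linear rate needs iterations logarithmic in $1/\varepsilon$, while a sublinear $1/t$ rate needs iterations linear in $1/\varepsilon$, which is why convergence to a stationary point rather than the global optimum is so much slower.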
>>**Your comment**: The insights of critical proof steps that cause different convergence rates in Theorem 3.
**Response**: We thank the reviewer for the suggestion. We will add the proof sketch of Theorem 3 in the revised version. Here we will briefly summarize the critical steps.
**Step 1**: Derive the **closed-form gradient** of the loss function over the variable $W^Q$. Please see Lemma 1, equations (23), (24) in line 778 for the Softmax attention Transformer, and Lemma 6, equations (41), (42) in line 821 for the Gaussian attention Transformer. Intuitively, the gradient of Softmax attention is much more complicated than that of the Gaussian attention Transformer, which leads to a more complicated landscape and more local solutions.
**Step 2**: **Analyze the gradient** of Transformers with both kernels under the **same initialization**, Equation (12). For the Gaussian attention Transformer, it can be iteratively shown during gradient descent training that: 1) the variable $W^Q$ is bounded; 2) the PL condition holds (i.e., the optimization landscape remains near-convex); 3) the loss function decreases linearly (please see Equation (52) in line 829). For the Softmax attention Transformer, there is no guarantee that the PL condition holds during the iterative gradient descent updates. To illustrate this claim, we construct a concrete counterexample to show that the PL condition does not hold (please see line 284).
>>**Your question**: The authors construct a compelling example to illustrate the failure of a Transformer with a Gaussian kernel, but it requires a specific initialization form. To achieve a similar convergence rate for the Softmax kernel, how many additional conditions are required?
**Response**: Respectfully, we need to clarify that we **do not** construct an example to illustrate the failure of the Transformer with a **Gaussian kernel**. Instead, we provide a detailed example to illustrate that the Transformer with Softmax attention converges to a **stationary point** with a sublinear rate, but fails to achieve linear global convergence. (Please see Equation (14) in Theorem 2 and the failure to satisfy the PL condition in lines 284 and 285).
**Additional condition for Softmax Transformer**: For a Transformer with Softmax attention, the **PL condition or convexity of loss function** should be assumed during the **whole training phase** to achieve the same linear convergence rate as a Transformer with Gaussian attention. This condition is much more stringent than the condition for Gaussian attention, where the PL condition is assumed **only at initialization**.
>>**Your question**: I don’t fully understand why optimizing the query matrix alone leads to similar dynamics as when optimizing both the query and the key matrix. Could you clarify this?
**Response**: The similarity comes from the symmetry of $W^Q$ and $W^K$ in the attention mechanism. The symmetry means the training dynamics/gradients of $W^Q$ and $W^K$ have **exactly the same** properties. Please refer to Appendix 1.2, Lemma 1 (3). We write down the closed-form gradient of $f$ with respect to $W_h^Q$ for the Softmax Transformer. Similarly, we can derive the gradient with respect to $W_h^K$, with the only difference from Lemma 1 (3) being that the $W_h^K$ term on the right side of the equation is replaced with $W_h^Q$. Thus, the gradients of $W^Q$ and $W^K$ have the **same** structure. As long as we initialize $W^Q$ and $W^K$ with the same properties, the training dynamics are also **the same** for both gradients. Intuitively, this stems from the fact that $W^Q$ and $W^K$ occupy symmetric positions with respect to calculating their gradients in the attention head; see equation (3).
[1] Huang, Baihe, et al. "Fl-ntk: A neural tangent kernel-based framework for federated learning analysis." International Conference on Machine Learning. PMLR, 2021.
[2] Gao, Tianxiang, et al. "A global convergence theory for deep relu implicit networks via over-parameterization." arXiv preprint arXiv:2110.05645 (2021).
[3] Wu, Yongtao, et al. "On the convergence of encoder-only shallow transformers." Advances in Neural Information Processing Systems 36 (2024).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response! The authors have clarified the new insights and novel contributions compared with previous works. As a result, I am raising my score to 6. | Summary: This paper analyzes the convergence behavior of Transformer models with different attention mechanisms, specifically comparing Softmax and Gaussian kernel attention. The authors provide theoretical results on the conditions for global convergence and empirically demonstrate differences in optimization landscapes between the two attention types. The work aims to provide insights into why Transformers perform well and the potential advantages/disadvantages of Softmax attention.
Strengths: - The analysis of Gaussian kernel attention is insightful and novel. The authors show theoretically and empirically that Gaussian attention can lead to better convergence properties compared to Softmax attention. This provides valuable understanding of alternative attention mechanisms.
- The paper addresses an important question about the optimization dynamics of Transformers, which is crucial given their widespread use. The motivation to understand why Transformers work well and potential limitations of - Softmax attention is timely and relevant.
- The theoretical analysis is reasonably thorough, with the authors deriving conditions for global convergence for both Softmax and Gaussian attention under different update scenarios (Theorems 1-3). This helps formalize the intuitions about attention behavior.
Weaknesses: - The empirical evaluation is limited and doesn't fully validate the theoretical claims in practical settings. The experiments use simplified Transformer models on relatively small datasets (IMDB and Pathfinder). It would be more convincing to see results on larger, more complex Transformer architectures and standard NLP benchmarks.
- The paper's contribution relative to prior work is not clearly articulated. While the authors cite some related papers, they don't adequately explain how their results extend or differ from existing analyses of Transformer convergence, e.g., [1].
- The practical implications of the theoretical results are not sufficiently discussed. It's unclear how the insights about Gaussian attention could be applied to improve real-world Transformer models or training.
[1] Y. Huang, Y. Cheng, and Y. Liang. In-context convergence of transformers. arXiv preprint arXiv:2310.05249, 2023.
Technical Quality: 3
Clarity: 1
Questions for Authors: See weakness above.
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The authors acknowledge some limitations in Section 6, noting that they require strong initialization and a large embedding size to obtain global convergence guarantees. They also mention the gap between their analysis and real-world scenarios. However, a more thorough discussion of limitations would strengthen the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >>**Your comment**: The empirical evaluation is limited and doesn't fully validate the theoretical claims in practical settings. The experiments use simplified Transformer models on relatively small datasets (IMDB and Pathfinder). It would be more convincing to see results on larger, more complex Transformer architectures and standard NLP benchmarks.
**Response**: We thank the reviewer for the comment. First, we need to clarify that the goal of our paper is to investigate the **training dynamics** and **landscape** of the Transformer model. The main purpose of our experiments is to **provide a concrete example** illustrating the existence of local solutions in Softmax attention. Similar experiments on **shallow** networks can be found in [1,2,9], which have settings similar to ours.
We agree that additional experiments on larger models could potentially provide further support for our theory. However, due to computational budget constraints, we are unable to complete these experiments during the rebuttal period. We will add results on these larger models at a later time.
>>**Your comment**: The paper's contribution relative to prior work is not clearly articulated. While the authors cite some related papers, they don't adequately explain how their results extend or differ from existing analyses of Transformer convergence, e.g., [4].
**Response**: We thank the reviewer for the comment. We will add the following detailed comparison with the related works in the revised version:
1. Comparison with [3,4]: These works analyze the convergence of in-context training, where a prompt is constructed with all the training samples and a single test sample. The goal of these works is to achieve zero test loss (in expectation) by optimizing the loss function modeled by the prompt. However, our analysis is based on standard empirical loss minimization, which does not involve any prompt construction.
2. Comparison with [5,6,7,8]: This line of work analyzes the global convergence of training over-parameterized fully connected networks. However, the Transformer structure and the optimization landscape in our study are more complicated.
3. Comparison with [9]: This work is closely related to ours. However, the primary goals of [9] and our work are **different**. In [9], the main purpose is to **build a convergence theory** of a shallow Transformer with realistic structure and initialization, but they do not identify the roles of different matrices in the convergence. In our paper, we focus on **investigating the role of each variable** within the attention mechanism in optimization. Further, we analyze **the convergence behavior of different attention kernels**, i.e., Softmax and Gaussian, which is not covered by [9].
>>**Your comment**: The practical implications of the theoretical results are not sufficiently discussed. It's unclear how the insights about Gaussian attention could be applied to improve real-world Transformer models or training.
**Response**: We thank the reviewer for the comment. Here we discuss the insights about the benefits of using a Gaussian kernel in training Transformer models. As discussed in [1], Gaussian kernels have a natural interpretation of assigning “attention” to different tokens, similar to Softmax attention. However, the Gaussian kernel performs intrinsic normalization, which allows the attention mechanism to have a more reasonable condition number than Softmax attention. Therefore, designing Transformers with Gaussian attention can greatly improve the stability of model training. For example, in [1] Table 1, the kernelized attention (Gaussian kernel) and Skyformer (modified Gaussian kernel) show better performance than Softmax attention. In [10], it is shown that the Gaussian Transformer is more efficient and generalizes better than the Softmax Transformer.
[1] Chen, Yifan, et al. "Skyformer: Remodel self-attention with Gaussian kernel and Nyström method." Advances in Neural Information Processing Systems 34 (2021): 2122-2135.
[2] Tay, Yi, et al. "Long range arena: A benchmark for efficient transformers." arXiv preprint arXiv:2011.04006 (2020).
[3] Zhang, Ruiqi, Spencer Frei, and Peter L. Bartlett. "Trained transformers learn linear models in-context." arXiv preprint arXiv:2306.09927 (2023).
[4] Huang, Yu, Yuan Cheng, and Yingbin Liang. "In-context convergence of transformers." arXiv preprint arXiv:2310.05249 (2023).
[5] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. "A convergence theory for deep learning via over-parameterization." International conference on machine learning. PMLR, 2019.
[6] Du, Simon, et al. "Gradient descent finds global minima of deep neural networks." International conference on machine learning. PMLR, 2019.
[7] Nguyen, Quynh N., and Marco Mondelli. "Global convergence of deep networks with one wide layer followed by pyramidal topology." Advances in Neural Information Processing Systems 33 (2020): 11961-11972.
[8] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. "On the convergence rate of training recurrent neural networks." Advances in neural information processing systems 32 (2019).
[9] Wu, Yongtao, et al. "On the convergence of encoder-only shallow transformers." Advances in Neural Information Processing Systems 36 (2024).
[10] Lu, Jiachen, et al. "Soft: Softmax-free transformer with linear complexity." Advances in Neural Information Processing Systems 34 (2021): 21297-21309.
---
Rebuttal Comment 1.1:
Title: Thank you. Raise score from 5 to 6. More questions.
Comment: Thank you for your rebuttal. It fixed some of my concerns and I raised my score from 5 to 6.
On the other hand, I have some other further questions.
- In Remark 1, it seems that the only assumption for data and initialization is that $B_0$ is full rank. Then, by over-parameterization, i.e., $D = \Theta (N n)$, the loss landscape is nearly convex around initialization. This sounds very similar to NTK or a lazy learning regime. I wonder whether there are any feature learning or rich learning regime effects in your analysis.
- Is there any idea about the generalization with some assumptions on data distribution?
- Why Gaussian kernel is $-\\|XW_Q - XW_K\\|_2^2$ rather than $(XW_Q W_K^T X^T)^2$? From my perspective, $-\\|XW_Q - XW_K\\|_2^2 \approx C + 2 XW_Q W_K^T X^T$ with some constant $C$, which is not far different from the standard kernel.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer cxoC's questions
Comment: We thank the reviewer for the feedback and raise in score!
**Comment**: In Remark 1, it seems that the only assumption for data and initialization is that $B_0$ is full rank. Then, by over-parameterization, i.e. $D=\Omega(Nn)$, the loss landscape is nearly convex around initialization. This sounds very similar to NTK or a lazy learning regime. I wonder whether there are any feature learning or rich learning regime effects in your analysis.
**Response**: Yes, our optimization analysis is similar to the NTK regime with lazy updates. Our current theory does not consider feature learning or rich-regime effects. However, we believe it is possible to extend our analysis toward the feature learning regime. For example, the high-dimensional embedding can be further modeled by a function that extracts data features, e.g., networks, allowing the attention head to be interpreted as computing the "correlation" of extracted features. We plan to investigate the feature learning regime in future work.
**Comment**: Is there any idea about the generalization with some assumptions on data distribution?
**Response**: Yes, generalization analysis can be derived from our theory with certain assumptions on data distribution and model weights, although it is not analyzed in this paper as we primarily focus on the optimization landscape. As a concrete example, it is possible to extend our analysis by following the framework in [1], where the data is balanced and a binary classification problem is considered. We can further derive a similar generalization bound as [1] Theorem 1.
**Comment:** The formula of Gaussian kernel.
**Response**: Respectfully, we would like to clarify that the formula of Gaussian kernel in your comment is missing an additional activation function. With different activations included, the Gaussian kernels (defined in [2, 3] and equation (11)) and Softmax kernels differ significantly, as the Softmax activation considers the entire row of the attention matrix, whereas the Gaussian does not.
Specifically, the $k$-th row and $j$-th column of Gaussian attention is given by (see equation (11)):
$$S\left(W\_h^Q, W\_h^K ; X\_i\right)\_{k j}=\exp\left(-\frac{1}{\sqrt{d}}\left(X\_{ik\cdot} W\_h^Q-X\_{ij\cdot} W\_h^K\right)^2\right).$$
The Softmax attention is given by:
$$S\_{ih}:=\text{Softmax}\left(\frac{X\_{i} W\_h^Q\left(X\_{i} W\_h^K\right)^{\top}}{\sqrt{d}}\right) \in \mathbb{R}^{n\times n}.$$
Notice that each entry of the Gaussian kernel is only related to the $k$-th and $j$-th tokens in sample $X_i$. In contrast, in Softmax attention, each entry is also related to the other entries in the same row, since the Softmax activation computes the normalization over a row. This leads to a more complex optimization landscape and makes convergence more challenging. These two kernels also result in different performance on some tasks [2, 3].
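This locality difference can be checked numerically. The following sketch (our own illustration; head indices and the paper's exact setup are simplified, and the square on the row difference is interpreted as a squared Euclidean norm) perturbs a single key and shows that only one column of the Gaussian kernel changes, while every Softmax row changes through its normalization:

```python
import numpy as np

# Illustrative sketch (our own, not the paper's code): compare the
# entrywise Gaussian kernel with row-normalized Softmax attention.
rng = np.random.default_rng(1)
n, d = 4, 8
X = rng.standard_normal((n, d))
WQ = rng.standard_normal((d, d))
WK = rng.standard_normal((d, d))
Q, K = X @ WQ, X @ WK

def gaussian_attn(Q, K):
    # Entry (k, j) depends only on tokens k and j.
    return np.exp(-np.sum((Q[:, None, :] - K[None, :, :]) ** 2, axis=-1) / np.sqrt(d))

def softmax_attn(Q, K):
    s = Q @ K.T / np.sqrt(d)
    e = np.exp(s - s.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)       # normalization over each row

gauss, soft = gaussian_attn(Q, K), softmax_attn(Q, K)

# Perturb the key of token j = 0 only.
K2 = K.copy()
K2[0] += 1.0
gauss2, soft2 = gaussian_attn(Q, K2), softmax_attn(Q, K2)

print(np.allclose(gauss[:, 1:], gauss2[:, 1:]))  # True: only column 0 changes
print(np.allclose(soft[:, 1:], soft2[:, 1:]))    # False: every row renormalizes
```

The Gaussian kernel's entrywise locality is exactly the property the response points to: a Softmax entry couples to its whole row via the normalization, while a Gaussian entry does not.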
[1] Li, Hongkang, et al. "A theoretical understanding of shallow vision transformers: Learning, generalization, and sample complexity." arXiv preprint arXiv:2302.06015 (2023).
[2] Lu, Jiachen, et al. "Soft: Softmax-free transformer with linear complexity." Advances in Neural Information Processing Systems 34 (2021): 21297-21309.
[3] Chen, Yifan, et al. "Skyformer: Remodel self-attention with Gaussian kernel and Nyström method." Advances in Neural Information Processing Systems 34 (2021): 2122-2135.
Rebuttal: We thank the reviewers for the comments and suggestions. We summarize the strengths of our work noted by the reviewers as follows:
1. The analysis of the Transformer is under a realistic setting and requires no strict data assumption.
2. We delve into each variable within the attention kernels. The analyses of the Transformer landscape and the comparison between different attention kernels are novel and significant.
3. The analysis is non-trivial and requires sophisticated techniques.
Regarding the weaknesses and questions, we address the several comments about the comparison with [1] here. Please see our individual detailed responses for the other comments. We are glad to answer any further questions.
**Comment 1**: The new insight of our Theorem 2 compared to [1]; the fixed $W^O$ in our paper.
**Response**: We thank the reviewer for the comment. We state the differences between [1] and our work in terms of optimization algorithm and discuss the new insight we can gain from our setting.
**Difference in optimization algorithm**: In [1], all the variables ($W^Q,W^K,W^V,W^O$) are updated to build a convergence theory, while in our work, we consider updating $W^Q,W^K,W^V$ in Theorem 2. We find that including $W^O$ in the analysis will **hide the true role** of $W^Q,W^K,W^V$ in optimization (see the explanation in the following paragraph). Given that we aim to unravel the role of different variables **within the attention head** in the convergence analysis, we leave out $W^O$. However, we need to point out that if we include $W^O$ as a variable, we can still achieve the same convergence rate as in Theorem 2.
**Difference in insight**: In Proposition 1 of [1], the linear decrease of the loss function comes **only** from updating the **output layer $W^O$**. In our work, however, the linear decrease **comes from $W^V$ in the attention head**. Our result provides a **different** insight from [1]: optimization of the **attention head alone** can achieve a linear convergence rate. In our analysis, the same linear convergence rate can be derived if we include $W^O$ in the optimization, but it would remain unclear whether $W^V$, or even the attention head, contributes to the linear decrease, since updating $W^O$ alone can already lead to the same result. Thus, leaving out $W^O$ makes our insight into the role of $W^Q,W^K,W^V$ **clearer**.
**Comment 2**: Comparison of assumptions in our Theorem 2 and [1]. New insight from the conditions.
**Response**:
**Comparison of conditions**: The conditions in [1] and our Theorem both consist of two parts: the network size and the scale of initial weights. We will compare the two conditions below:
(1) Network size: [1] considers the single-head attention mechanism, which requires $D\geq n,d\geq N$. This means the embedding dimension and model size are lower bounded by the sequence length and sample size, respectively. In our work, we require $Nn\leq HD$, while $d$ can be small. If we compute the total number of neurons in each variable of the attention mechanism in the $H=1$ case, then the lower bound on the total neuron number in our result is $\Omega(Nn)$, which is **exactly the same** as in [1]! This shows that our network size lower bound is **consistent** with the literature.
(2) Scale of initial weights: The initialization conditions in the two papers are very similar; it is **almost equally difficult** to satisfy them. The initialization condition in (8) implies that: 1. both works require the smallest singular value of the attention head $B_0$ ($\underline{\lambda}^B$ in our paper) not to be too small; 2. both works require the scale of each $W_h^Q, W_h^K$, and $W^V$ not to be too large; 3. both works require the initial weights not to be too far from the global optimal solution; 4. our Equation (8) requires the scale of $W^O$ to be large, while in [1], the scale of $W^O$ has an **upper bound**. From the above comparison, we can conclude that the initial conditions in the two papers are **very similar**, except for the requirement on the scale of $W^O$, which is opposite. Further, we can verify that the gap between the lower bounds for the smallest singular value of the attention head ($\bar{\lambda}^B$ in our paper) is at the constant level, as is the gap between the upper bounds for the scale of $W^Q,W^K,W^V$. The only difference is the scale of $W^O$, which results from the different techniques used in the two papers: the linear decrease of the loss function in [1] comes from updating $W^O$, while in our paper it comes from updating $W^V$. Please refer to the response to the previous question (**Difference in insight**).
**Regarding the new insight**: As we discussed in the previous point, the new insight we can derive from Theorem 2 is that: Optimization on the **attention head alone** can achieve a linear convergence rate. Please refer to the response to the previous question (**Difference in insight**).
[1] Wu, Yongtao, et al. "On the convergence of encoder-only shallow transformers." Advances in Neural Information Processing Systems 36 (2024). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models | Accept (poster) | Summary: The paper proposes to mitigate task interference during multimodal instruction tuning with a mixture of experts, in both the language and the encoder side. The paper is well written, contains insightful analysis and shows improvements over baselines.
Strengths: - Mixture of experts is an important topic that is heavily studied in LLMs but little in multimodal models.
- The problem of task interference is an important problem that is more present in a multimodal setting.
- The paper is well written and easy to follow.
- The proposed approach is well motivated with insightful analysis and show improvements over baselines
Weaknesses: 1. While the paper shows improvements over different baselines in Tab.3, the scores still lag behind methods without MoE, such as LLaVA (which the paper builds on top of), even though the proposed approach has significantly more parameters, is pretrained for more steps (stage 1), and uses different visual encoders.
2. The paper claims that “This means the language experts in our MoLE module gradually specialize in distinct task domains during training.” However the visualization does not support that. For instance, in Fig.5 E4 is used in most tasks types, while E1 is relatively less used. But it is not clear if we can map different experts to different task groups.
3. The paper is not evaluated on common and recent multimodal benchmarks: SEED, MME, MM-Vet, POPE, VQAv2 … that are considered in other methods like LLaVA.
4. The design of the MoE is very different for the LLM and visual encoders. Each visual encoder is considered as an expert. Did the authors experiment with a typical MoE (replicating the FFN) within a single visual encoder?
5. The paper uses deformable cross-attention, but I did not find any experiment to support this design choice compared to using simple cross-attention. Did the authors conduct this experiment?
6. The paper states that load balancing did not help. Any insights into why this is the case, given that these losses are typically used in most MoE papers? Also, did the authors encounter any instabilities during training? It is important to discuss these in the paper, as instability is a major problem in MoE training.
7. The paper's main novelty seems to be in applying MoE also in the vision encoders. This question has also been explored in previous works in slightly different contexts, such as [1] and [2]. I found the novelty limited in this regard.
[1] Mustafa, Basil, et al. "Multimodal contrastive learning with limoe: the language-image mixture of experts." Advances in Neural Information Processing Systems 35 (2022): 9564-9576.
[2] Shen, Sheng, et al. "Scaling Vision-Language Models with Sparse Mixture of Experts." Findings of the Association for Computational Linguistics: EMNLP 2023. 2023.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weaknesses section (e.g., points 4, 5, and 6).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Some limitations and societal impacts are discussed in the paper or the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** Lag behind LLaVA with more parameters, vision encoders, and training time.
**A1:** The original LLaVA includes far fewer tasks than our MoME for training and evaluation; we reported only the results on the shared tasks in Table 3 of the manuscript. **Thus, it is improper and inaccurate to conclude that our MoME lags behind LLaVA.**
In the following, we evaluate the original LLaVA model on our Multitasking Benchmark (which contains many types of VL tasks) for a fair and comprehensive comparison. Meanwhile, we retrain both the original LLaVA model and LLaVA with additional DINO and Pix2Struct encoders using the same datasets and settings as MoME. The results are summarized below:
| | Gen. | REC | REG | Doc. | Avg. |
| --- | --- | --- | --- | --- | --- |
| LLaVA-v1.5-7B (Original) | 69.78 | 46.40 | 71.85 | 22.06 | **52.52** |
| LLaVA-v1.5-7B (Retrained) | 75.04 | 61.42 | 58.79 | 30.84 | **56.52** |
| LLaVA-v1.5-7B (w/ DINO&PixStruct) | 70.36 | 74.89 | 57.55 | 32.83 | **58.91** |
| MoME-7B | 79.65 | 81.58 | 64.83 | 53.69 | **69.94** |
The table shows that **MoME (avg. 69.94 points) has significant advantages over the LLaVA model (Original avg. 52.52 points & Retrained avg. 56.52 points) on the multitasking benchmark.** Additionally, simply adding encoders to LLaVA (avg. 58.91 points) results in limited improvement and lags significantly behind MoME with the same vision encoders, training time, and nearly the same number of parameters. The substantial performance gains of MoME are achieved through the proposed ADT and Dynamic Router, which adaptively mitigate interference.
---
**Q2:** Visualization of MoLE.
**A2:** Since each LLM layer contains an independent MoLE block, there is no relationship between experts in different layers. Therefore, the statement that "E4 is used in most task types, while E1 is relatively less used" might be misunderstood, as E1 has different meanings across layers. Instead, from Fig. 5 we can see that the utilization of experts varies significantly across different tasks, indicating the specialization of language experts. For example, REC and REG tasks primarily use E2 in layer 12, while TabFact uses E1, and AOKVQA uses both.
Moreover, from Fig.8, we can observe that the routing results differ significantly among different task groups, while the routing preferences are similar within the same task group, which means different experts specialize in different task groups. We have included an excerpt of Fig. 8 in the rebuttal supplementary material, from which we can see clear differences among text-rich (ChartQA - TextCaps), caption (COCOCap - Flickr30K), VQA (IconQA - GQA), REC, and REG tasks.
---
**Q3:** Multimodal benchmark results.
**A3: Our MoME focuses on multitasking ability and has advantages in benchmarks that contain diverse types of tasks.** However, recent multimodal benchmarks (e.g. MMBench, MME) are primarily organized in a VQA style with multiple choice formats, having a rather limited scope of tasks and instruction templates. For example, all the instructions in MMBench are single-choice with fewer than four options and exhibit high similarity, which does not align with forms of human expression.
In contrast, we evaluate MoME on a multitasking benchmark that contains many types of tasks with diverse instructions and more closely resembles human instructions in a real-world environment. As seen in Table of **A1**, MoME shows significant performance improvements over LLaVA original, LLaVA retrained on Multitasking Benchmark, and LLaVA with more visual encoders.
---
**Q4:** Design of MoVE (typical MoE)
**A4:** The MoVE module is designed to leverage powerful, off-the-shelf pre-trained vision encoders, each specialized in a specific domain, making it more suitable for multimodal large language models, which are designed to be general and versatile.
Moreover, fine-tuning vision encoders was shown by [1] to be resource-consuming and to often lead to poor results on the typically small-scale VL instruction datasets. Consequently, we chose to freeze the pre-trained vision encoders and regard them as vision experts.
[1] Wang, Guangzhi, et al. "What Makes for Good Visual Tokenizers for Large Language Models?."
---
**Q5:** Experiments supporting deformable cross-attention.
**A5:** We supplemented an ablation experiment by replacing the deformable cross-attention in ADT with standard cross-attention, which can be found in Table 1 of the rebuttal supplementary material.
The table shows that MoVE with cross-attention (avg. 64.39 points) consistently performs much worse than with ADT (avg. 69.39 points). We attribute this to the powerful inductive bias of deformable attention in processing 2D feature maps. We will add this experiment in the revised version.
---
**Q6:** Load balancing and instabilities of MoLE.
**A6:** We did not encounter instabilities in the training process of MoLE so the load balancing loss did not help. We infer that this is because the lightweight design of adapters makes the MoE block stable. The same phenomenon was also recorded in [1].
[1] Chen, Zeren, et al. "Octavius: Mitigating task interference in mllms via moe."
---
**Q7:** Novelty limited.
**A7:** The motivation, structure, and context of our method all have significant differences compared to previous works. “LIMoE” and “Scaling Vision-Language Models with Sparse Mixture of Experts” both apply a typical MoE to CLIP encoders to scale up and improve performance. In contrast, the experts of MoVE are different vision encoders, each specialized in a specific domain. We proposed an effective way to make use of powerful off-the-shelf pre-trained vision encoders and mitigate the interference among them. Moreover, we comprehensively explored task differences in both the vision and language modalities and significantly enhanced the multitasking capability of multimodal large language models. Therefore, it can be argued that MoME is novel and provides many valuable insights.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. The authors addressed some of the raised points and added additional experiments supporting their approach. Based on their feedback, I will increase my score.
---
Reply to Comment 1.1.1:
Comment: We are happy to hear that we've addressed your concerns, and we thank the reviewer again for the feedback! | Summary: The paper proposes an MoE design for MLLMs, utilizing MoE in both the visual encoding procedure and the LLM decoding procedure. The paper uses a dynamic routing module to mix visual features from different experts, and adopts a multi-adapter structure to combine the knowledge of different language experts. Extensive experiments showcase the effectiveness of the proposed method on various downstream tasks.
Strengths: The paper provides extensive quantitative results on several multimodal downstream tasks to prove the effectiveness of the proposed MoE design. The paper also adds qualitative results to intuitively showcase the performance.
Weaknesses: The design proposed in this paper does not seem innovative enough. Its key design philosophy has already been introduced in many previous works. As for MoE in vision, the CVPR 2024 paper “Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs” combines CLIP and DINOv2 features to improve visual representations, and the arXiv 2023 paper “SILC: Improving Vision Language Pretraining with Self-Distillation” combines the learning objectives of CLIP and DINOv2 for a better pretraining outcome. Moreover, the references [12, 5, 8, 54] cited in this paper have already investigated MoE in LLMs, as stated in the introduction section.
It is already a consensus that MoE in MLLMs, especially combining multimodal pretrained features (such as CLIP) and purely visual pretrained features (such as DINOv2), can improve model performance. Combining a third feature (in this paper, the feature from Pix2Struct) besides CLIP and DINOv2 to improve performance is a pure engineering design, which does not add to the technical contribution of this paper. Moreover, the ‘Dynamic Router’ proposed in this paper is also a simple MLP network that is widely used for combining different features. To sum up, it is hard to claim all these designs as contributions of this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the difference between this work and previous works discussing MoE in vision (the several papers listed in the weaknesses section)? How does this paper bring new knowledge or designs that were previously unknown or not sufficiently explored by the community?
2. How does the MoE structure affect the inference speed? Are there any quantitative results?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: The authors have already adequately addressed the potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** Key design philosophy has already been brought out in many previous works.
**A1:** The motivation and capabilities of MoVE are completely different from the previous work.
1. “**Eyes Wide Shut**” introduced an additional DINO encoder and chose to interleave visual features, which ended up with redundant visual features of twice the length. In contrast, we utilized the MoE technique to dynamically aggregate features and keep the token length unchanged, which is more efficient and adaptive.
2. “**SILC**” combined the learning objectives of CLIP and DINOv2 by self-distillation to improve image-text contrastive learning. It did not focus on multitasking nor did it use the MoE technique. In contrast, we concentrated on multitasking MLLMs, innovatively utilizing the MoE technique to leverage pre-trained vision encoders and effectively mitigate interference among them.
Therefore, we believe that our MoVE is a new and novel design that is distinct from previous works.
---
**Q2:** It is already a consensus that MoE in MLLM, especially combining multimodal pre-trained features (such as CLIP) and pure visual pre-trained features (such as DINOv2), can improve model performance.
**A2:** It is a consensus that combining different visual features can improve model performance. However, combining them in a MoE style has not been explored, which is not trivial. Existing MLLM works have demonstrated combining two or more vision encoders can significantly boost the visual perception ability of MLLM, but they just use simple addition or interleave [1]. In our MoVE, we aim to dynamically and adaptively aggregate these visual features according to task demands. Table 1 in the manuscript shows the significant improvement of our MoVE compared to simple addition.
[1] Tong, Shengbang, et al. "Eyes wide shut? exploring the visual shortcomings of multimodal llms."
---
**Q3:** Combining Pix2Struct features is a pure engineering design.
**A3:** Introducing Pix2Struct is not a mere engineering design. Rather, it reveals several representative problems, including aggregating features with different shapes and sizes and handling encoders focused on different data domains. As stated in the manuscript, the aspect ratios of Pix2Struct feature shapes vary depending on the input image, and it focuses on text-rich images. Experiments in Table 1 have shown that simply pooling and aggregating these diverse features can lead to severe interference. Instead, we proposed MoVE to adaptively transform and aggregate different kinds of features from CLIP (224x224), DINOv2 (448x448), and Pix2Struct (arbitrary shape) and achieved significant improvement. The ability to handle features of arbitrary shape makes MoME more versatile and valuable.
---
**Q4:** ‘Dynamic Router’ is a simple MLP.
**A4:** The ‘Dynamic Router’ is not just a simple MLP; its significance lies in its working mechanism rather than its internal structure. Its working mechanism allows MoVE to dynamically and adaptively aggregate visual features, greatly reducing interference and improving performance by over 6 points, as shown in Table 1 of the manuscript. Furthermore, existing work [1] has verified that the internal architecture has a negligible effect on performance.
[1] Ye, Qinyuan, Juan Zha, and Xiang Ren. "Eliciting and understanding cross-task skills with task-level mixture-of-experts."
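For concreteness, here is a minimal sketch of the kind of gating MLP being discussed: softmax weights computed over K expert feature maps, which are then mixed without changing the token length. All names, shapes, and the two-layer gate architecture are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_router(expert_feats, W1, b1, W2, b2):
    """Aggregate per-token features from K vision experts.

    expert_feats: (K, T, D) -- K experts, T tokens, D channels.
    A small MLP maps a pooled descriptor to K logits; softmax turns
    them into mixing weights, so the output keeps shape (T, D).
    """
    summary = expert_feats.mean(axis=(0, 1))               # (D,) pooled descriptor
    hidden = np.maximum(0.0, summary @ W1 + b1)            # ReLU hidden layer
    weights = softmax(hidden @ W2 + b2)                    # (K,) mixing weights
    mixed = np.tensordot(weights, expert_feats, axes=(0, 0))  # (T, D)
    return mixed, weights
```

The point made in the rebuttal is that the significance lies in this mechanism (learned, input-dependent mixing weights) rather than in the internal structure of the gate itself.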
---
**Q5:** Differences from previous works & New knowledge or designs.
**A5:** Compared to multi-modal large language models that are equipped with more than one vision encoder, MoME revealed the severe conflicts among visual features and proposed an efficient and novel method to tackle the interferences within both the transformation and aggregation process. Compared to works that simply employ typical MoE design into vision encoders, we deeply explore the dilemma of multi-modal large language models and propose a framework that can benefit from a variety of pre-trained models while avoiding interferences among them.
In this work, we highlighted the task interference problem in both the textual and visual information, while previous works only investigated MoE in LLMs and primarily concentrated on textual differences between tasks, overlooking the equally important visual information.
---
**Q6:** Quantitative results of Inference speed changes.
**A6:** For MoVE, the number of trainable parameters is 112.83M, accounting for 1.36% of the total parameters and increasing inference time by 1.076%.
For MoLE, we use lightweight adapters as our experts, which only increase the parameters by 0.668% and inference time by 5.959%.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts to summarize their contributions. However, I still believe that the design philosophy of this work—mixture of vision experts—is pretty much similar to previous works. A simple adjustment or compression in token length won't bring new knowledge to the model. Nevertheless, I appreciate the detailed explanation of the paper's design differences compared to previous works, as well as the inference speed provided by the authors. Therefore, I would like to raise my score to 4.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. We would like to clarify some concepts again and sincerely hope you will reassess the innovation and contributions of our work.
1. The innovative aspect of this paper, the “mixture of vision experts,” differs significantly from previous works [1,2,3]. In the context of the availability of numerous pre-trained models, we have innovatively proposed a dynamic and adaptive approach to fuse vision encoders each specialized in a specific domain. To the best of our knowledge, the adaptive mixture of various pre-trained vision encoders has not been explored in the works of MLLM, as these works only consider addition or concat [4,5]. Prior to the era of MLLM, existing works [1,2,3] focused on applying typical MoE methods (experts are several identical sub-networks) within Vision Encoders and training from scratch, which is a completely different technical solution. **We acknowledge that our MoVE shares a similar design philosophy with previous works at a broad conceptual level (effectively utilizing many visual branches in a vision-language model). However, as mentioned above, the differences in technical details (e.g., the combination framework of vision branches, the feature processing method, and the design purpose) are also significant.**
2. We would like to argue against the contention that our ADT did not "bring new knowledge to the model." As stated in the manuscript and rebuttal, **one of the major issues in combining various pre-trained vision encoders is the misalignment among visual tokens**. This issue is caused by the differences in the pre-training setting and architecture of these vision encoders. Simply combining them (pooling and addition) will result in significant information loss. Thus, we propose Adaptive Deformable Transformation to mitigate the information loss by adaptively refining the pooled features. As shown in Table 1 of the manuscript and the table below, our ADT is very helpful and achieves **an average gain of almost 4 points**. To conclude, we believe that **our ADT is a promising method to resolve the discrepancies of various pre-trained vision encoders**, instead of “a simple adjust or compression in token length”.
| Strategy | Gen. | REC | REG | Doc. | Avg. |
| --- | --- | --- | --- | --- | --- |
| Pool + Add | 70.36 | 74.89 | 57.55 | 32.83 | **58.91** |
| ADT + Add | 74.35 | 76.93 | 61.01 | 39.23 | **62.88** |
| MoVE (ADT + Router) | 79.05 | 81.92 | 63.82 | 52.77 | **69.39** |
We hope that these explanations can address your concerns, and we would greatly appreciate it if you could consider giving us a higher rating.
---
**Reference**
[1] Naeem, Muhammad Ferjad, et al. "Silc: Improving vision language pretraining with self-distillation."
[2] Mustafa, Basil, et al. "Multimodal contrastive learning with limoe: the language-image mixture of experts."
[3] Shen, Sheng, et al. "Scaling vision-language models with sparse mixture of experts."
[4] Tong, Shengbang, et al. "Eyes wide shut? exploring the visual shortcomings of multimodal llms."
[5] Jiang, Dongsheng, et al. "From clip to dino: Visual encoders shout in multi-modal large language models."
---
Reply to Comment 1.1.2:
Comment: As the interactive discussion window is about to close, we sincerely invite the reviewer cdtp to read our follow-up response. We hope that our explanations effectively address your concerns, and we would appreciate it if you could consider revising your rating based on this information.
---
Rebuttal 2:
Title: Reminder to review the rebuttal
Comment: Dear Reviewer cdtp,
We thank Reviewer cdtp again for the valuable comments. We have responded to each of the concerns raised in the review, and we are eager to continue the conversation. As the interactive discussion window will close soon, we kindly invite the reviewer to read our response and see if there are any further questions.
Thank you!
Best regards,
Authors | Summary: In this paper, the authors introduce a mixture of multimodal experts (MoME) to reduce task interference and develop a generalist MLLM. MoME consists of two main components: a mixture of vision experts (MoVE) and a mixture of language experts (MoLE). MoVE can adaptively adjust features transformed from different vision encoders and boasts strong compatibility with various transformation architectures. MoLE integrates sparsely gated experts into LLMs, achieving seamless improvements while keeping inference costs nearly unchanged.
Strengths: 1. The analysis of task interference and mixture of vision experts in this paper is clear, highlighting the necessity of a vision mixture of experts
2. The experiments are well-conducted and quite comprehensive
3. The study demonstrates strong performance on most datasets compared with other generalist and MoE MLLMs
Weaknesses: 1. From Table 1, we can see that both ADT and Router have achieved notable improvements. Could you explain the internal mechanisms behind this? I am quite curious as to why the improvements are so significant.
2. In Table 3, many values are missing. Could you add some results from more general multimodal benchmarks to make the experiments more comprehensive, such as MME, MMBench, MM-Vet, LLaVA$^W$, ScienceQA, etc.?
Technical Quality: 3
Clarity: 3
Questions for Authors: One advantage of MoE is its fast inference speed. Could you conduct an experiment to verify the model's inference speed?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors explain the limitations of their study and the potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** Reasons for notable improvements in Table 1.
**A1:** When mixing visual representations from different vision experts, the representations are first transformed into a unified-length sequence of feature vectors and then aggregated; each step can severely damage the visual information.
- Transforming visual representations of different sequence lengths into a unified length using downsample pooling causes inevitable information loss, since it is rule-based and static.
- Visual representations lie in different feature spaces due to the diversity of data domains and training methods. They will interfere with each other if simply added together, causing information loss.
The proposed ADT and router are dynamic and learnable and can effectively mitigate information loss in both steps:
- Transformation: The ADT module uses deformable attention to compensate for the information loss caused by downsample pooling.
- Aggregation: The dynamic router can adaptively aggregate features according to task demands, maximizing the retention of visual information appropriate to each task.
Consequently, the proposed MoVE achieved notable improvement compared to the baseline in Table 1.
---
**Q2:** General multimodal benchmark results.
**A2:** **Our MoME focuses on multitasking ability and has advantages in benchmarks that contain diverse types of tasks.** However, recent multimodal benchmarks (e.g. MMBench, MME) are primarily organized in a VQA style with multiple choice formats, having a rather limited scope of tasks and instruction templates. For example, all the instructions in MMBench are single-choice with fewer than four options and exhibit high similarity, which does not align with forms of human expression.
In contrast, we evaluate MoME on a multitasking benchmark that contains many types of tasks with diverse instructions and more closely resembles human instructions in a real-world environment. In the Table below, we compare MoME with LLaVA v1.5 on the Multitasking Benchmark. The original LLaVA model performs poorly on the Multitasking Benchmark since it is trained with a limited variety of tasks. To ensure a fair comparison, we retrain it using the same data and settings as MoME. While some improvements are observed, its performance remained significantly lower than ours. MoME exhibits a clear advantage in multitasking due to its effectiveness in mitigating conflicts.
| | Gen. | REC | REG | Doc. | Avg. |
| --- | --- | --- | --- | --- | --- |
| LLaVA-v1.5-7B (Original) | 69.78 | 46.40 | 71.85 | 22.06 | **52.52** |
| LLaVA-v1.5-7B (Retrained) | 75.04 | 61.42 | 58.79 | 30.84 | **56.52** |
| LLaVA-v1.5-7B (w/ DINO&PixStruct) | 70.36 | 74.89 | 57.55 | 32.83 | **58.91** |
| MoME-7B | 79.65 | 81.58 | 64.83 | 53.69 | **69.94** |
---
**Q3:** Inference Speed.
**A3:** The typical MoE in LLMs has a fast inference speed because of its sparse activation mechanism. However, there are other types of MoE that pursue performance improvements by mitigating task conflict rather than efficiency [1,2]. Inspired by these works, we designed MoVE and MoLE for better performance with only a slight increase in parameters. Specifically, MoVE and MoLE result in just a 1.36% and 0.668% increase in parameters, and a 1.076% and 5.959% increase in inference time, respectively.
[1] Ye, Qinyuan, Juan Zha, and Xiang Ren. "Eliciting and understanding cross-task skills with task-level mixture-of-experts.".
[2] Zadouri, Ted, et al. "Pushing mixture of experts to the limit: Extremely parameter efficient moe for instruction tuning.".
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing most of my concerns; I will keep my positive rating.
---
Reply to Comment 1.1.1:
Comment: We are happy to hear that we've addressed your concerns, and we thank the reviewer again for the feedback! | null | null | Rebuttal 1:
Rebuttal: We would like to thank all reviewers (R#1g6k, R#cdtp, R#ZwrT) for their time and effort in providing constructive feedback. We are very encouraged that the reviewers found our work effective (R#1g6k, R#cdtp, R#ZwrT), with a clear analysis of task interference (R#1g6k), comprehensive experiments (R#1g6k, R#cdtp), and insightful analysis (R#ZwrT). We have built an official repository providing well-structured open-source code (released upon acceptance).
We have responded to all questions and comments in each review. Additionally, supplement material is included in the attached PDF to help clarify related concerns. We hope these responses provide a more comprehensive view of our paper. Please kindly consider increasing your rating if your concerns have been addressed.
Pdf: /pdf/dde5e55ab8e770ddc5e8916ae9b0d154f414104b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Adaptive Sampling for Efficient Softmax Approximation | Accept (poster) | Summary: This paper introduces an efficient algorithm called AdaptiveSoftmax, designed to compute the top k outputs of the softmax function more effectively than the traditional full softmax computation. The key innovation lies in its adaptive approach, which leverages multi-armed bandit techniques to prioritize computations for the largest input elements, significantly reducing sample complexity. The paper provides PAC (Probably Approximately Correct) guarantees for AdaptiveSoftmax, demonstrating its theoretical soundness. Empirical results on both real and synthetic datasets, including the EuroSAT dataset, validate the algorithm's efficiency, showing substantial reductions in computational overhead, often achieving tenfold improvements or more. Additionally, the proposed method for estimating the softmax partition function offers potential applications beyond the current scope, such as in kernel density estimation.
Strengths: 1. The Adaptive Softmax algorithm significantly reduces computational overhead compared to the full Softmax computation. This is particularly beneficial in high-dimensional settings, making the approach practical for large-scale Machine Learning applications. The concept of focusing on the relevant top-k outputs is interesting.
2. The paper provides strong theoretical foundations with PAC guarantees for the AdaptiveSoftmax algorithm. These guarantees ensure that the method is not only empirically effective but also theoretically sound, offering reliable performance bounds. There have been extensive experiments on real and synthetic data. The authors also provide results on a variety of networks, as CNNs and LLMs.
Weaknesses: 1. The method relies on the assumption of a variance proxy bound for the sub-Gaussian parameters of the constructed estimators. While the paper discusses loosening this assumption, its practical implications and the extent to which it holds in various scenarios are not thoroughly explored, potentially limiting the algorithm's applicability in more varied or less controlled environments.
2. The paper does not compare quantitatively to other adaptive softmax methods, like [A].
[A] Joulin, Armand, Moustapha Cissé, David Grangier, and Hervé Jégou. "Efficient softmax approximation for GPUs." In International conference on machine learning, pp. 1302-1310. PMLR, 2017.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Please discuss whether the assumptions constrain the applicable environments; do they impact the effectiveness of the algorithm?
2. It would be nice to see a quantitative comparison of the proposed method with other existing algorithms, e.g., speed improvements.
Confidence: 1
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have provided an extensive explanation of the limitations in the final section of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer wft8 for their careful reading of our discussion of empirical and theoretical improvements, and for highlighting this relevant reference. We discuss this paper, and the assumption of sub-gaussianity below.
1. **Sub-gaussian assumption:** this is a very fair question, which we discuss in detail in main response point 1. Briefly, assumptions are necessary to avoid $\Omega(nd)$ sample complexity, and empirically our assumptions are borne out. Folklore suggests that trained weights of large models are generally normally distributed, and we have verified this for the models examined in this paper (see attached Figure 1).
2. **Comparison:** following up on main point 2, no other methods provide theoretical guarantees (our PAC formulation in equation 4) aside from the naive baseline of exact computation, which we compare to. The method of the suggested paper "Efficient softmax approximation for GPUs" does not provide PAC guarantees and only gives empirical wall-clock speedups without any theoretical analysis of the number of samples needed, making it difficult to compare against directly. At a high level, the suggested method finds a learnable quantization (referred to as ``clusters'') of the matrix $A$, where the weights of the clusters are learned during training. Already, this poses a problem for inference, since the vocabulary in the test set may be distributed differently than in the training set for which the clusters are optimized. This falls under the concerns raised by reviewer kEa9 about distributional shifts, which our algorithm does not suffer from. We attempted to implement this method to compare against, but ran into some issues with their implementation. For example, note that the *forward* method (defined as a *torch.nn* module at [link](https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html)) requires as input *target*, the output labels, which are not available at inference time. In comparison, our algorithm includes the cost of approximating these labels (i.e. the best-arm identification) and does so efficiently while providing PAC guarantees.
---
Rebuttal Comment 1.1:
Comment: Thank you for the insightful discussion!
---
Reply to Comment 1.1.1:
Comment: Thank you for the swift response. We have incorporated yours and the other reviewers’ feedback and believe it has improved the quality of our paper. Further, it appears that the rebuttal has properly addressed the points raised in your review. If there are no further concerns, we would appreciate if your score were updated to reflect this. Please let us know if there is any additional information we can provide to help clarify. | Summary: This paper focuses on the efficient approximation of the softmax function. The authors propose an algorithm named AdaptiveSoftmax, which aims to reduce the computational cost of the softmax function in high-dimensional environments. Inspired by the multi-armed bandit problem, the algorithm adaptively allocates computational resources to important output values, efficiently calculating the top k values of the softmax function. It also provides PAC (Probably Approximately Correct) guarantees, demonstrating its effectiveness both theoretically and experimentally.
Strengths: Originality:
The AdaptiveSoftmax algorithm introduces a novel approach by adaptively allocating computational resources, addressing challenges in existing softmax approximation methods. Traditional approaches, such as using hierarchical models, reduce computational complexity but increase the number of internal classifiers, lacking accuracy guarantees. Methods leveraging label prior distributions require prior knowledge, limiting their applicability.
Quality and Presentation:
The paper provides detailed theoretical analysis and experimental validation, robustly supporting its technical claims. The presentation is clear, with a well-organized narrative from the review of related work to the proposal of the algorithm, its theoretical guarantees, and experimental results.
Significance:
The paper offers a novel solution to a significant problem in computing the softmax function. The AdaptiveSoftmax algorithm not only substantially improves computational efficiency compared to existing methods but also provides unique PAC guarantees. This algorithm holds potential for significant computational efficiency improvements in machine learning models dealing with high-dimensional data, potentially impacting a wide range of applications.
Weaknesses: In practical implementation, parameter tuning may be necessary. An ablation study on the sensitivity and justification of default hyperparameters would enhance the paper. Additionally, discussing scenarios where the proposed method may not be suitable would strengthen the paper. For instance, while the performance degradation due to approximation is shown to be limited on average, what about the worst-case scenarios, such as cases with significant data distributional shift?
Technical Quality: 3
Clarity: 4
Questions for Authors: How was the performance of AdaptiveSoftmax evaluated? For example, are the speedup metrics in Tables 1 and 2 theoretical values or execution time comparisons? If it is the latter, what benchmarks were used? PyTorch’s torch.nn.CrossEntropyLoss utilizes log_softmax and nll_loss, which are highly optimized for GPU performance. Has AdaptiveSoftmax been compared with these in practical settings on GPUs?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors appropriately mention the limitations of the proposed algorithm, noting that it is most beneficial in high-dimensional settings. However, in practical settings involving LLMs, methods highly optimized for GPUs are usually used. Therefore, the question remains as to whether a fair comparison has been made regarding the current effectiveness of the proposed method in this setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer kEa9 for their in-depth review; we wholeheartedly agree that this method of adaptive computation holds the potential for significant computational improvements across a wide range of applications. We respond to the specific questions below.
1. **Ablation study:** The theorems related to our algorithm’s sample complexity suggest minimal dependence on $\delta$, and slow growth with $\epsilon$ in the moderate $\epsilon$ regime, which is consistent with what we’ve observed experimentally. Following your suggestion, to better demonstrate and highlight this point, we have now added a table showing the sample efficiency scaling for a wide range of values of $\epsilon$ and $\delta$ in the Appendix (Rebuttal Table 1).
2. **Limitations:** As we discuss in the Limitations section of our paper, this method attains its largest gains in the high dimensional setting. However, as can be seen from our experimental results, even in smaller models like GPT2 ($d = 768$) our model exhibits over 7x improvement in sample complexity.
3. **Distribution shift:** Fortunately, our method does not suffer from these issues. It is designed to run at inference time and is truly instance-adaptive to the weights of the underlying model and the query. This is a great strength of our method, because as we discuss in main comment 2, many existing works such as the one referenced by Reviewer wft8 do in fact suffer from issues relating to distribution shift. Thank you for highlighting this point, we now add additional discussion to this end in our related works.
4. **Performance evaluation:** In this work, we use a proxy for FLOPs as our metric of comparison. This works by counting how many entries of the matrix $A$ the algorithm needs to observe (essentially, how many multiplications need to be performed). As this work focuses on providing a novel softmax approximation method with provable guarantees, with the goal of minimizing the number of entries of $A$ that need to be observed, this is the relevant metric to use to see that our algorithm is yielding the gains predicted by theory. The baseline we compare to is brute-force exact computation, which computes the entire $n \times d$ matrix-vector product. Our algorithm requires many fewer samples across the wide range of settings and parameter regimes we tested.
5. **Comparisons:** The suggested methods all perform exact computation, and so are captured by the naive baseline in our comparisons. These methods are highly GPU-optimized, implementing considerable hardware-conscious optimization, and so while their sample complexity is much worse than that of adaSoftmax, their wall-clock performance is better. Given the significant sample complexity improvement afforded by our method, we believe that improved wall-clock performance is eminently achievable, as we detail in Future Work. We discuss these details in points 2 and 3 of our main response.
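For concreteness, the entry-counting proxy described in point 4 can be sketched as follows. This is our own hypothetical illustration, not the paper's instrumentation; the class name and the 50-column-per-arm budget are assumptions made purely for the example.

```python
import numpy as np

# Count how many entries of A an algorithm observes, as a hardware-independent
# proxy for FLOPs, versus the n*d entries brute-force softmax(Ax) must touch.
class SampleCounter:
    def __init__(self, A):
        self.A = A
        self.count = 0  # entries of A observed so far

    def pull(self, arm, cols):
        # Observe A[arm, cols]; each observed entry counts as one "sample".
        self.count += len(cols)
        return self.A[arm, cols]

n, d = 1000, 768
rng = np.random.default_rng(0)
counter = SampleCounter(rng.standard_normal((n, d)))

# A hypothetical adaptive algorithm that inspects only 50 columns per arm
# observes far fewer entries than exact computation's n * d.
for arm in range(n):
    counter.pull(arm, rng.choice(d, size=50, replace=False))

budget_used, brute_force = counter.count, n * d  # 50,000 vs 768,000
```

Under this metric, the adaptive algorithm's gain is simply the ratio `brute_force / budget_used`, independent of any particular hardware implementation.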
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer kEa9
Comment: I thank the authors for the clarifications, I have no further questions. I would like to keep my positive score. | Summary: The softmax function is a widely used tool, e.g., as an activation in the final layer of a classifier. Hence, cutting down on its computational costs can have a significant impact across the AIML field. This paper aims at this by introducing an adaptive variant of computing the softmax for the top $k$ values by estimating the normalization and the index of the likely highest value in the input by employing a multi-armed bandit setting. Furthermore, theoretical guarantees on the accuracy of the output of the adaptive algorithm are provided and can be controlled by an additional parameter $\delta$.
Strengths: - Considering how ubiquitous the softmax function is in AIML methods, this paper promises a significant impact. The reported gains in sample efficiency within both synthetic and real-world datasets further underpin this point.
- The method has been presented with good clarity and has a good level of originality. Creating more flexible computation models is an important research endeavor when tackling the growing resource demands by deep learning methods while maintaining a desired level of accuracy, which is honored in this work by allowing for fine-granular control of the trade-off between resources and accuracy by using the target error and failure probability parameters.
Weaknesses: - Overall, a more extensive evaluation with improved clarity is highly desirable. It would be important to see the effect of changing the $\epsilon$ parameter as well when conducting the evaluation, which is, to my understanding, set to a constant value of $30\%$ throughout the experiments. Furthermore, reporting the effects of these changes could be made more comprehensive if presented in a plot with a wider range of choices for $\delta$, rather than just the three values considered.
- As has been discussed in the limitations section, there can be a considerable trade-off between more involved, adaptive methods and easy-to-batch and parallelize brute-force computations. To my understanding, the experiments do not consider this trade-off, e.g., by reporting the wall-clock time next to the gains in sample efficiency.
Technical Quality: 3
Clarity: 2
Questions for Authors: I have no questions for the authors.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have addressed the limitations in a dedicated section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer qUNV for their helpful feedback regarding improving the exposition of the improved algorithmic performance of our method.
We respond to their two main points below:
1. **Varying parameters:**
The user-desired parameters can take a wide range.
We kept $\epsilon = 0.3$ (i.e. 30\%) constant across all simulations because we observed varying $\epsilon$ did not result in significant changes to the performance of AdaptiveSoftmax.
Further, we suspect that for most users, $\delta$ will fall in the range we considered: $0.1-0.01$ (equivalently, 90-99\%).
However, to assuage any concerns and verify our assertion that adaSoftmax is not sensitive to the choice of $\epsilon$, we now run adaSoftmax with a much wider range of parameters on the MNIST dataset in Rebuttal Table 1, and will add this table to the appendix.
2. **Converting FLOP gains to wall-clock gains:**
As discussed in main point 2, our algorithm can be made minimally batch adaptive, to improve its parallelizability.
We further discuss in main point 3 the exciting line of future work towards converting the significant reduction in FLOPs achieved by AdaptiveSoftmax to improvements in wall clock speed.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifying responses to the reviews.
With the added insights from the rebuttal, I am happy to improve my rating. | Summary: This paper introduces an algorithm named AdaptiveSoftmax, designed to efficiently compute the top-k softmax values rather than the full softmax computation. This algorithm is particularly useful in high-dimensional settings where the computation of the full softmax function can be prohibitively expensive. The authors provide theoretical guarantees for the efficiency and accuracy of AdaptiveSoftmax and demonstrate its effectiveness through empirical results on real and synthetic datasets.
Strengths: a) The paper presents a novel approach to approximating the softmax function using adaptive sampling.
b) The authors provide probably approximately correct (PAC) guarantees for the performance of the AdaptiveSoftmax algorithm, which strengthens the credibility of their approach.
c) The paper includes extensive empirical results demonstrating the algorithm's efficiency on both synthetic and real-world datasets, including significant reductions in sample complexity compared to brute-force softmax computation.
Weaknesses: a) The algorithm relies on assumptions about the data distribution and variance proxies, which may not hold in all practical scenarios.
b) The adaptive nature of the algorithm introduces implementation complexity, particularly in balancing computational resources and ensuring efficient sampling.
c) While the algorithm is theoretically sound, its practical benefits are most pronounced in scenarios with very high-dimensional data. In lower-dimensional settings, the gains may be less significant.
d) It would be better to give a detailed explanation of each step in the algorithms. What is EstimateArm in Algorithm 1, step 8?
e) How sensitive is the algorithm's performance to the choice of parameters such as the temperature parameter?
f) Besides comparing with the full softmax computation, is there a detailed comparison with other approximate softmax algorithms?
Typo:
Line 5: “we present present an …” should be “we present an …”
Line 135: “Our objective them becomes …” should be “Our objective becomes …”
Technical Quality: 3
Clarity: 2
Questions for Authors: a)See weakness
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer qEfP for their insightful and detailed feedback.
We provide a point-by-point response to Reviewer qEfP’s comments below.
a) **Sub-Gaussian assumption:** This assumption is minimally restrictive and is borne out in practice.
We provide an in-depth discussion in main response point 1, which we have added to the Appendix.
b) **Computational efficiency of adaptivity:** our algorithm provably obtains sample complexity improvements, and as discussed in main rebuttal point 3, there are several exciting next steps to a final hardware optimized implementation that realizes these wall clock gains.
On the theoretical side, there has been a surge of work over the last decade focused on developing multi-armed bandit algorithms with minimal rounds of adaptivity that still retain the same theoretical guarantees.
As we show in this work, normalization estimation can be accomplished in 2 rounds of sampling, best arm identification in $\log(1/\epsilon)$ rounds, and best arm mean estimation in 1 final round.
The round complexity of PAC best arm identification can be further improved, at the cost of worse dependence on the gaps ([13], Hillel et al.).
Main rebuttal point 3 provides additional discussion, and many of these notes can be seen in our publicly available and easily reproducible code base (see attached files, will be made public on GitHub after the paper is published).
c) **Dimensionality:** Our algorithm indeed displays its largest gains in the high dimensional regime, which is the natural regime future models will be focused on.
As shown by our numerical experiments though, these gains are still substantial for moderate $d$.
For example, considering GPT2 ($d=768$), one of the smaller language models in use, our approach still yields over 7x improvement in sample complexity.
d) **Algorithm clarity:** Thank you for the suggestion.
We were space-limited with the initial submission and moved the algorithm descriptions to the Appendix.
With the added page for the camera ready version, we will add these algorithmic descriptions back into the main text.
The function EstimateArm simply pulls the input arm to accuracy $\epsilon$ with probability at least $1-\delta$ (as in Lemma 2): we will clarify this and package the result as an algorithm in the final version.
e) **Temperature:** Temperature is treated as a fixed constant (fixed parameter for the problem at hand, not tunable by the algorithm).
This is because tuning the temperature fundamentally changes the problem.
With higher temperatures, the only arms that matter are the best and second best arms, and so adaptivity is extremely helpful.
At low temperatures, the output will be essentially the uniform distribution, and the computation is trivial and adaptivity unhelpful.
We will add a brief discussion clarifying this to the Appendix.
With respect to other parameters, the error probability and FLOP gains of AdaptiveSoftmax are insensitive to changes in $\epsilon$ and vary most with the choice of $\delta$.
We demonstrate this trend on the MNIST dataset in Rebuttal Table 1.
f) **Comparison:** As we discuss in overall point 2, there are no other approximate softmax algorithms that can provide PAC guarantees aside from the baseline of exact computation, which provides our point of comparison. We provide a qualitative comparison to a baseline without PAC guarantees suggested by another reviewer kEa9 (see **Comparison**).
g) **Typo:** Thank you for catching these; both are now fixed.
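As a concrete illustration of what EstimateArm in point (d) does, the following is a minimal sketch of our own. The sample count uses a standard Hoeffding-style bound for $\sigma$-sub-Gaussian samples and is an assumption for illustration, not the precise constant from Lemma 2.

```python
import numpy as np

def estimate_arm(a_row, x, eps, delta, sigma, rng):
    """Estimate the arm mean <a_row, x> to additive accuracy eps with
    probability >= 1 - delta, assuming each scaled coordinate sample
    d * a_row[j] * x[j] is sigma-sub-Gaussian. Sketch only: the sample
    count below is a generic Hoeffding-style bound."""
    d = len(a_row)
    n = int(np.ceil(2 * sigma**2 * np.log(2 / delta) / eps**2))
    if n >= d:                        # cheaper to just compute exactly
        return float(a_row @ x)
    idx = rng.integers(0, d, size=n)  # sample coordinates with replacement
    return float(np.mean(d * a_row[idx] * x[idx]))  # unbiased for <a_row, x>

rng = np.random.default_rng(0)
# A constant arm makes every sample exact, so the estimate equals <a_row, x> = 2.0.
a_row, x = np.full(1000, 0.002), np.ones(1000)
mu_hat = estimate_arm(a_row, x, eps=1.0, delta=0.1, sigma=1.0, rng=rng)
```

Note that when the required sample count meets or exceeds the dimension $d$, the sketch falls back to exact computation, so the estimate never costs more than the brute-force baseline.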
---
Rebuttal Comment 1.1:
Comment: Thank you again for the detailed review and helpful comments. In our rebuttal we worked to answer and incorporate, in a point-by-point manner, all the concerns and ideas that you raised. Please let us know if you have any additional questions or concerns regarding this manuscript; we would be happy to discuss them in the remaining 1.5 days of the discussion period.
Rebuttal: We would like to thank all the reviewers for their careful reading of our manuscript.
We were pleased to see that all reviewers appreciated the novel PAC guarantees provided by this work for efficient, instance adaptive softmax computation.
We have addressed all the comments and suggestions made by the reviewers, which has helped improve the quality and clarity of the paper.
We discuss some common points of concern below, and look forward to the upcoming discussion period.
1. **Assumption of sub-Gaussianity:** This is the only assumption that we make in this paper.
It is one of the weakest assumptions possible (does not assume that the arms are Bernoulli or Gaussian), and is a common assumption in the multi-armed bandit and adaptive computation literature [4].
Unfortunately, without such an assumption, no nontrivial results are possible; consider the case where we do not have preprocessing access to $A$, the vector $x$ is all ones, and $A$ is all $1$s except for one randomly selected entry which has value $2$.
In this case, any algorithm for PAC computation of softmax$(Ax)$ with $\delta < 1-1/n$ (even just identification of the largest entry of $Ax$) requires $\Omega(nd)$ samples.
More practically though, these vectors are the result of a machine learning pipeline, and not of adversarial construction.
As shown by our simulations, this worst case scenario never occurs in practice, and arm pulls are generally well approximated by a Gaussian (see Figure 1 in the attached pdf, now added to the Appendix).
Additionally, note that for any fixed problem instance (fixed matrix $A$ and $x$), all arm pulls are bounded, and are thus sub-Gaussian.
We have added this more detailed discussion to the Appendix, and have added a reference to it in the main text.
2. **Comparison:** While many algorithms have been devised for accelerating softmax computation, no existing methods, to our knowledge, provide $(\epsilon,\delta)$-PAC guarantees (as we formulate in equation (4)) save for exact computation.
Current approximations use some combination of Locality Sensitive Hashing (LSH) to cluster the vocabulary, truncation of the tail to approximate the normalization factor, and sketching to approximate the matrix multiplication.
However, these methods are not truly instance-adaptive, and fall prey to many common flaws in machine learning pipelines. For example, reviewer wft8 references a method that learns a quantization of the matrix $A$ during training [2]. The method proposed in that paper has no theoretical guarantees, requires prior knowledge of the correct output label, and only adapts to the training data, not the actual data presented at inference. Since it focuses on learning a good quantization over the training data, it can suffer from distributional shift between the training and test datasets (a concern raised by reviewer kEa9 that our method avoids). See the **Comparison** response to Reviewer kEa9 for more details.
3. **Converting FLOP gains to wall clock gains:** The focus of this paper is to develop the first provably adaptive softmax algorithm with PAC guarantees, highlighting its dramatically improved sample complexity across a wide variety of models and tasks.
The eventual goal of this method is to enable wall-clock improvements in hardware implementations.
We provided a brief discussion of this in the Limitations and future work section (Lines 333-352), but have now added additional discussion (below) there and to the supplement.
These next steps of converting our provably and empirically performant method into a hardware-optimized, wall-clock-efficient algorithm are an exciting direction of future work, which we detail below.
In most modern-day transformer architectures, memory I/O serves as the primary bottleneck [1].
AdaptiveSoftmax already presents an opportunity to significantly scale down the number of entries of the matrix $A$ that must be loaded at inference time and, if memory remains the bottleneck in the future, to improve model bandwidth by a similar factor.
This objective appears in reach, since we have designed the components of AdaptiveSoftmax to be amenable to tiling and parallelization.
Most notably, our implementation of AdaptiveSoftmax uses the same column to generate an importance-weighted sample for each active arm.
The reasons for this implementation decision are two-fold.
First, it takes advantage of the locality of entries in the same column to load samples faster, and, second, it removes intra-column correlation, which can yield theoretically improved performance [Baharav and Tse 2019].
Adjacent column samples can also be combined by simply summing their respective importance weights, admitting a simple tiling of our matrix $A$ in which individual tiles can be sized to fit into SRAM on a GPU along with a copy of the vector $x$ and the current mean/variance estimates for each arm.
Then, we can dynamically load these tiles into SRAM based on the arm estimates as we do currently.
The successive elimination bandits algorithm utilized by AdaptiveSoftmax is also, by choice, quite easily parallelizable.
We may also store two copies of our matrix $A$ (one with wider tiles and one with taller tiles) to take advantage of our tiling at all stages of the AdaptiveSoftmax algorithm: both in later stages of adaptive estimation, when a larger number of samples is necessary for fewer arms, and in earlier stages, when a smaller number of samples is necessary for many arms.
This said, we observe in our experiments that the bulk of compute is invested in our early samples of many arms.
Just using basic parallelization to speed up this step could result in the desired speed improvements.
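The importance-weighted column sampling described above can be sketched as follows. This is our illustrative reconstruction, not the authors' implementation; in particular, sampling columns with probability proportional to $|x|$ is an assumption made for the example.

```python
import numpy as np

def column_sample_estimate(A, x, n_samples, rng):
    """Estimate A @ x for every arm at once by sampling columns with
    probability proportional to |x|: each sampled column j contributes
    A[:, j] * x[j] / p[j], whose expectation is exactly A @ x."""
    p = np.abs(x) / np.abs(x).sum()        # importance weights over columns
    cols = rng.choice(A.shape[1], size=n_samples, p=p)
    est = np.zeros(A.shape[0])
    for j in cols:                         # the same column serves all active arms
        est += A[:, j] * x[j] / p[j]
    return est / n_samples

rng = np.random.default_rng(0)
A = np.arange(12, dtype=float).reshape(3, 4)
x = np.array([0.0, 0.0, 2.0, 0.0])         # all mass on one column: estimate is exact
est = column_sample_estimate(A, x, n_samples=8, rng=rng)
```

Because every active arm reuses the same sampled column index, the column `A[:, j]` can be loaded once per sample, which is what makes the tiling described above natural.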
[1] Ivanov et al. "Data movement is all you need: A case study on optimizing transformers" 2021
[2] Joulin et al. "Efficient softmax approximation for GPUs." 2017.
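The successive elimination procedure referenced above admits a compact generic sketch. The version below is ours, not the authors' implementation; the confidence radius and per-round doubling schedule are standard textbook choices we assume for illustration.

```python
import numpy as np

def successive_elimination(pull, n_arms, delta, max_rounds=32):
    """Generic PAC best-arm identification by successive elimination:
    each round doubles the per-arm budget, then discards arms whose upper
    confidence bound falls below the best lower confidence bound.
    pull(arm, m) must return m samples, assumed 1-sub-Gaussian."""
    active = list(range(n_arms))
    sums, counts = np.zeros(n_arms), np.zeros(n_arms)
    for t in range(1, max_rounds + 1):
        m = 2 ** t
        for a in active:
            sums[a] += pull(a, m).sum()
            counts[a] += m
        means = sums[active] / counts[active]
        # Anytime confidence radius (union bound over arms and rounds).
        rad = np.sqrt(2 * np.log(4 * n_arms * t**2 / delta) / counts[active])
        best_lcb = (means - rad).max()
        active = [a for a, mu, r in zip(active, means, rad) if mu + r >= best_lcb]
        if len(active) == 1:
            break
    return active[0]

# Three deterministic (zero-variance, hence sub-Gaussian) arms with means 0, 1, 0.5.
pull = lambda a, m: np.full(m, [0.0, 1.0, 0.5][a])
best = successive_elimination(pull, n_arms=3, delta=0.1)
```

The elimination rounds map directly onto the batched sampling discussed above: all surviving arms are pulled together each round, which is what makes the procedure amenable to parallelization.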
Pdf: /pdf/f6fe36dd31866e0f383c57721f82f12433ffbdaa.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Designs for Enabling Collaboration in Human-Machine Teaming via Interactive and Explainable Systems | Accept (poster) | Summary: The paper proposes a white-box approach to human-machine teaming, in which human teammates can see their virtual counterparts’ policies and adjust them accordingly. The framework is built on top of differentiable decision trees, and the authors propose contextual pruning as a means of simplification for training purposes and readability by the human team members. They use this framework to quantify the benefits of interpretability in HMT and the tradeoff between high interpretability and high accuracy, the latter typically better achieved by traditional black box solutions. The authors use their framework to conduct a thorough statistical analysis comparing different white and black box conditions and use their findings to list a series of guidelines for HMT.
Strengths: - I enjoyed reading this paper, and I think the idea is novel, and the contribution of this paper to the field is significant. Interpretability is crucial in HMT if virtual agents are to be seen as real teammates instead of mere tools. Being able to understand these agents' policies and change them to agree with the human's beliefs is one step in that direction. The key point here is that the changes made by humans may not be the best ones (typically achieved by blackbox solutions), but they may be the ones that foster more collaboration between humans and machines.
- The paper is very well-written, and the ideas are very clear. The steps for reproducibility are detailed, and the statistical analysis is comprehensive.
Weaknesses: The authors propose a pruning strategy to simplify training and interpretability but provide no evidence supporting either. It would be helpful to validate such claims with time and memory usage assessment and IV1-C1 analysis with and without pruning. The authors briefly discuss the former in C.3.2, but it could be improved by displaying the numbers.
Technical Quality: 4
Clarity: 4
Questions for Authors: - In line 230, the authors state that contextual pruning significantly improves the ease of training. Are the two pruning strategies enough? I am unfamiliar with DDT, so I wonder if this is novel. How is this different than traditional pruning methods in DDTs?
- As noted by the authors in line 357, some participants outperformed the maximum performance of IVI-C5 in teaming iterations 3 and 4. I wonder if those participants show an increasing trend in performance from iteration 1 to 4? I cannot tell that from the scatter plot. I am asking this because I am curious to know if the improvements are due to randomness or if humans are applying a conscious strategy when modifying the agents’ policies, leading to incremental improvements at every iteration.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors discuss the limitations of their work in the Appendix. I would argue there's one more limitation:
It seems performance in the mixed-initiative scenario is highly dependent on presentation, as validated by better performance improvements in participants more familiar with Trees. This work shows that interpretability is also relative. What is interpretable to me may be convoluted to another, which may also affect the team's performance. It would be interesting to check for differences in IVI-C1 with different visualization methods in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for noting that they enjoyed reading our paper, that the paper's contribution to the field is significant, and that our paper is well-written and the ideas are clear. We have responded below to the weaknesses and questions noted by the reviewer.
**Contextual Pruning Results and Novelty** -- The two post-hoc pruning strategies we utilize are pruning the tree given the boundaries of each state variable and removing redundancies within the tree. This need for pruning arises naturally during the training of the IDCT model via reinforcement learning, as different subspaces of the tree model may become more important (higher probability of reaching certain sub-trees) or completely inactivated (impossible to reach a certain sub-space due to changes in weights caused by gradient descent). It is important to note that these pruning techniques follow an ideology similar to that of neural network pruning but do not modify the model's behavior in any way. Neural network pruning approaches, on the other hand, often remove weights with smaller magnitudes or activations, which results in model changes that can harm performance.
These specific pruning strategies have not been applied to differentiable decision tree models before to improve interpretability. In training the IDCT model, we conducted a hyperparameter search over different tree sizes and chose the best-performing model post-pruning with under 32 leaves. In the end, we found that a 256-leaf IDCT model was most amenable to training in each domain, and contextual pruning was able to reduce this large tree into a simple tree of three leaves and two leaves, respectively (a 64-128x reduction in model size). This pruning technique is essential as trees of arbitrarily large depths can be difficult to understand [1] and simulate [2], and that a sufficiently sparse DT is desirable and considered interpretable [3].
[1] Abhishek Ghose and Balaraman Ravindran. Interpretability with accurate small models. Frontiers in Artificial Intelligence, 3, 2020.
[2] Himabindu Lakkaraju, Stephen H. Bach, and Jure Leskovec. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 1675–1684, New York, NY, USA, 2016. Association for Computing Machinery.
[3] Zachary C. Lipton. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3):31–57, 2018.
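As an illustration of the two post-hoc rules described above (bound-based reachability pruning and redundancy removal), here is a minimal sketch over a hypothetical dict-based tree representation. It is our reconstruction for exposition, not the IDCT codebase, and it leaves the tree's behavior on reachable inputs unchanged.

```python
def prune(node, bounds):
    """Contextually prune a decision tree given known [lo, hi] bounds per
    state variable. Internal nodes: {'feat': i, 'thresh': t, 'left': .., 'right': ..}
    with the left branch taken when x[feat] <= thresh; leaves: {'leaf': action}."""
    if 'leaf' in node:
        return node
    lo, hi = bounds[node['feat']]
    t = node['thresh']
    if hi <= t:                          # x[feat] <= t always holds: right unreachable
        return prune(node['left'], bounds)
    if lo > t:                           # x[feat] > t always holds: left unreachable
        return prune(node['right'], bounds)
    lb, rb = dict(bounds), dict(bounds)  # tighten the bounds down each branch
    lb[node['feat']], rb[node['feat']] = (lo, t), (t, hi)
    left, right = prune(node['left'], lb), prune(node['right'], rb)
    if left == right:                    # redundant split: both sides identical
        return left
    return {'feat': node['feat'], 'thresh': t, 'left': left, 'right': right}

# A split on a variable known to lie in [0, 3] never reaches the right branch.
reachable = prune({'feat': 0, 'thresh': 5.0,
                   'left': {'leaf': 'chop'}, 'right': {'leaf': 'serve'}},
                  {0: (0.0, 3.0)})
# A split whose children agree collapses to a single leaf.
collapsed = prune({'feat': 0, 'thresh': 5.0,
                   'left': {'leaf': 'serve'}, 'right': {'leaf': 'serve'}},
                  {0: (0.0, 10.0)})
```

Applied recursively from the root, these two rules alone can shrink a large trained tree dramatically whenever training has driven most routing probability into a few sub-trees.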
**Human Improvement from Iteration 1 to 4** -- In the rebuttal document, we have added an additional display of the reward trajectories for each participant, with a single color assigned to each participant's behavior for each domain. These figures, while showcasing the trajectories of each participant well, can get cluttered, and thus we chose to showcase our findings via a scatter plot. In IV2-D1: Forced Coordination, you can see that many participants do not linearly increase from Iterations 1 to 4. Often, mistakes in tree logic can lead to an agent that does not collaborate well, and as this domain requires collaboration, this leads to a sub-100 score.
The improvement in IV2-D2: Optional Collaboration between iterations one and four was found to be significant (p<0.01). This implies that in this domain, users were applying a conscious strategy when modifying the agents' policies. However, even in this domain, the improvement was not necessarily monotonically increasing for all participants.
**Additional Limitation** -- We thank the reviewer for noting this additional limitation and will add it to our limitations section.
---
Rebuttal Comment 1.1:
Title: I maintain my decision
Comment: Thank you for the rebuttal and additional results. I think this is a good paper. I maintain my decision. | Summary: In this paper the authors present an approach to human-AI teaming in the common Overcooked domain via Interpretable Discrete Control Trees (IDCTs), which are differentiable decision trees which the authors visualize and make controllable. The authors present two examples of where existing blackbox models may demonstrate a gap in teaming performance. They then present results of a human subject study that demonstrates that authors' approach is outperformed by a fictitious co-play baseline.
Strengths: Human subject studies studying human-AI teaming are still relatively rare due to their difficulty. As such, a new study is always beneficial and of interest to the community. The authors also include many variations of their approach, which is beneficial in terms of more deeply understanding the mechanisms of human-AI teaming.
Weaknesses: I like this work, but at present this paper has a number of key weaknesses holding it back.
First, the current text of the paper contains a large number of unsupported claims. Almost all of the text in italics represents claims that are not substantiated by an argument, a citation, or the results of the paper. There are similarly design decisions that are not motivated or explained, such as why the authors chose to use IDCTs.
Second, Section 3 introduces the possibility of a gap in teaming performance with current models. However, the evidence for this is presented as two examples. This is sufficient to demonstrate that this specific approach has failure cases, but not to demonstrate a need for the approach the authors propose. I'd recommend working on a fuller survey of existing Overcooked approaches to quantitatively evaluate the rate at which collaboration gaps may be occurring. This section represents the primary motivation for the authors' work, and so its being a weak point makes the whole paper weaker.
Third, the authors do not give full technical details for any of the approaches: not the approach of Carroll et al., nor their own ICDT approach, nor their implementation of fictitious co-play. This is an issue as it makes it very difficult as a reader to understand what has been done at a technical level. For example, it's unclear to me what size the ICDTs used in the study were; I only know that the users were limited to expanding the tree to a depth of 4. But this doesn't tell me if the initial tree had a depth of 3. Similarly, the pruning approach only seems to remove redundant nodes, so it's unclear how much this would actually prune (the authors state 8-16x smaller, but it's unclear to which trees they refer). Given that a major problem in the user study was the ICDTs' performance, these details are crucial. While there's an anonymized GitHub in the appendices, it should not be necessary to go through it to understand the work.
Fourth, while the human subject study seems well-designed, certain details are unclear. It's unclear what population is being drawn on for the participants, though I would guess university undergraduates given the demographics. It's unclear how knowledgeable these participants were about AI, except for one mentioned figure around familiarity with trees. It's unclear why the authors stopped recruiting at 50, since 10 per condition is considered an absolute minimum. Finally, the methodology is somewhat unclear: it's unclear what order the participants experienced the domains in. Was it consistent? If so, why? If not, there were 10 conditions, not 5.
Finally, the results do not seem to support the authors' claims. From the results, my takeaway is that participants preferred performance over interpretability. This goes against several of the authors' claims in the paper, most notably "As seen through these findings, the ability to interact with an interpretable model is perceived significantly better across several measures". The authors also make a claim about trending towards significance, which is not statistically sound.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. What was the size of the IDCTs used in the study?
2. Did participants experience the two domains in a fixed or random order?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The major limitations come from the poor performance of the IDCTs and the decision to limit the size of the human subject study. The authors do not sufficiently address these limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for noting that our study in Human-AI teaming would be beneficial to the community.
**Unsupported Claims and Design Decisions** -- As the field of Human-AI Teaming is still relatively new, some motivations are positional. For example, in considering lines 129-132, some may argue that AI agents and humans do not need consensus or that the human should always defer to the AI's strategy. In some domains, this may be effective. However, in domains where agents and humans need to collaborate closely, we believe the process of consensus will lead to better collaboration performance and understanding between teammates. This is validated in our human-subject study findings (see Figure 4b of the manuscript), where users who can interact with the AI's policy significantly improve over repeated gameplay. Several works show that social robots that can adapt online and develop social relationships achieve more successful and sustained interactions with users [1].
[1] Leite et al. Empathic robots for long-term interaction: evaluating social presence, engagement and perceived support in children.
The design guidelines are directly related to the results of our paper.
- Creation of white-box agents that achieve competitive initial performance to black-box agents: If model performance is equal, interpretable models provide increased accountability compared to black-box models. Importantly, these models with user interaction led to positive team development.
- Design of learning schemes to support the generation of collaborative behaviors: This is derived from the case study and analysis of Fictitious Co-Play. Designing objective functions so that agents collaborate well with humans and/or online adaptation schemes to lead to personalized, effective teammates is an important research direction.
- Creation of interfaces that enable mixed-ability users to improve team collaboration: This is based on system usability scores collected from users, emphasizing the need for creating modification interfaces that support a wider variety of users.
- Evaluation of teaming in a larger number of interactions: This was derived from our user study findings, where a higher # of interactions may have provided a better understanding of the team development process.
The design decision behind the IDCT is that this tree-based model affords transparency and supports training via RL. This allows us to directly compare with prior frameworks in Overcooked that leverage RL for training collaborative agents and provides us with an interpretable model that users can modify and visualize.
**Case Study regarding the Teaming Gap with Current Models** -- We agree that this case study could be expanded. This first study (Figure 1) is conducted with an actual human player following a scripted strategy while collaborating with an agent publicly available from Carroll et al. While a human could have conducted a larger number of trials with the agent, it was clear from the set of trials that the agent could not adapt to the human-preferred strategy.
The second study focuses on Fictitious Co-Play. A heuristic collaborative strategy and heuristic individual strategy are programmed, receiving scores of 408 and 306. Then, an FCP agent is trained, converging to a score of 295.06 \pm 1.86 over 50 teaming simulations. This FCP agent is also evaluated with humans in the study, achieving scores ranging from approximately 120 to 315 in the last teaming round (Figure 4b-right). These gameplay scores of FCP, both in teaming with synthetic agents and real humans, are far below a heuristic collaborative strategy, signifying the gap. A full assessment including other benchmarks and analyzing agent collaborativeness, both quantitatively and qualitatively, while interesting, would likely be a full paper in itself.
**Technical Details regarding the Approaches** -- The details regarding the IDCT approach are found in Appendix C alongside pictures of the agent models (Figure 6 and 7). In line 321, we note that the trained policies had two leaves in Forced Coordination and three leaves in Optional Collaboration. As mentioned in Appendix C.4, we train our IDCT models with 256 leaves. A reduction to a depth of one (2 leaves) or two (3 leaves) is a pruning reduction of 128x and 64x. We will update these numbers and shift these details into the main paper.
Details of the FCP baseline are also found in Appendix Section C.4. The AI-Led Policy Modification is an adaptation of Carroll et al.'s approach to an online setting with an IDCT. We provided high-level details about this approach in Line 279-284 and within the footnote on page 6. We will include further details about the online optimization procedure in the Appendix.
**User Study Detail Clarifications --** Our study was conducted at a university with a diverse population majoring in different engineering disciplines, economics, and sciences. All users had some college education and were enrolled at an engineering-focused university as an undergraduate or graduate student. Users were asked for demographic information and their experience with games and decision trees, and completed a personality survey. This information was included when determining significant trends.
In future studies, it would be beneficial to increase the # of participants and gameplay trials. As noted in the Procedure, the domains are randomly ordered. Our experiment is a 5 (teaming method; between) x 2 (no. of domains; within) x 4 (no. of repeated evaluations; within) mixed-factorial experiment.
**Results Supporting Author Claims** -- The statement noted was comparing conditions IV1-C1 to IV1-C2, IV1-C3, and IV1-C4. This statement meant that given the same tree-based model, the ability to interact adds subjective benefits. The reviewer is correct in noting that participants did assess higher-performing agents more positively in their subjective ratings. We will clarify this statement.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thanks to the authors for their detailed response. My concerns around the claims, and the results not supporting them, remain unchanged, and so I am choosing to maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading our rebuttal and replying so quickly. We would love to improve our paper by adjusting the language around unsupported claims or better tying certain results to these claims. Above, in our reply, we discussed how our results directly relate to our guidelines. Are there any other specific unsupported claims you would be able to highlight so that we could improve our paper?
This would be much appreciated. Thank you in advance! | Summary: This manuscript focuses on collaboration in human-machine teaming (HMT) based on interactive and explainable systems. To address existing issues, such as decoupling, the author(s) explored an interesting paradigm in HMT and proposed some guidelines based on the study. The author(s) also pointed out some potential research directions; some of the conclusions are intuitive, but interesting.
Strengths: In my opinion, the strengths of this manuscript are as follows:
1. Demonstrating that current HMT approaches struggle to adapt to human-preferred strategies, often resulting in suboptimal team performance.
2. The proposed architecture enables end-users to modify AI behavior, facilitating a feedback loop for team development.
Weaknesses: In my opinion, the weaknesses of this manuscript are as follows:
1. As the author(s) mentioned, due to the population setting, the findings have a population bias and may not generalize to wider situations.
2. The cited work is a little out of date. This makes the manuscript seem not ready for publication at conferences such as NeurIPS.
3. Some descriptions in the main text are not so clear.
For more details, please see the Section "Questions" below.
Technical Quality: 3
Clarity: 3
Questions for Authors: I read the manuscript, and I have the following questions/comments. Thanks.
1. We need to balance sparsity and stability in designing a learning algorithm. In this manuscript, the author(s) adopted L1 regularization; I am not so sure if it will cause some issues due to the instability of L1.
2. Regarding how users interact with AI across repeated play under different factors, the author(s) performed some human-subject-involved studies. About the population used in the study, can you discuss the implications of the population bias in more detail and how it might affect the generalizability of your findings?
Also, maybe sex/age-based difference analysis is also worth exploring.
3. If possible, could you explain more about the potential challenges in scaling the human-led policy modification approach to more complex and dynamic environments beyond the Overcooked-AI domain?
4. Regarding the citations, I do not think the current references are sufficient, at least, for a manuscript preparing to submit to computer conferences such as NeurIPS, since the newest citation was two years ago, and there are only 4 such 2022 citations.
5. In Line 58, maybe "reinforcement learning" should be introduced with its abbreviation there, rather than waiting until Line 70.
6. It would be great to give the full name for PPO, such as Proximal ... (PPO).
Some other format issues in references:
(1) The author's name, sometimes using the abbreviation, sometimes not, such as Ref.[1], Ref.[33].
(2) Sometimes, the conference name has both the full name and its abbreviation, and sometimes not, such as, Ref.[4] vs Ref.[5].
(3) In Ref.[17], "ai"=>"AI".
Please check carefully; it would be great if the author(s) could correct these issues.
I would like to consider adjusting my current score based on the responses from the author(s). Thanks.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We have responded to the weaknesses and questions noted by the reviewer.
**Instability of L1** -- We thank the reviewer for this comment. The L1 regularization is only applied to the action leaf nodes of the tree policy. This regularization serves to make the categorical distributions within each leaf node more sparse so that the agent acts more deterministically. The mechanism used in our architecture to train an interpretable tree model via reinforcement learning is called differentiable crispification and was formulated by prior work (see [1]). From our observation, the L1 regularization successfully improved how deterministic the resultant tree-based policies were.
[1] Paleja, R., Niu, Y., Silva, A., Ritchie, C., Choi, S., & Gombolay, M. (2022). Learning interpretable, high-performing policies for continuous control problems. arXiv preprint arXiv:2202.02352.
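As a generic sketch of how an L1 penalty on leaf-node parameters might be folded into the training objective (the function and argument names here are hypothetical, and the paper's exact parameterization via differentiable crispification [1] may differ):

```python
def l1_regularized_loss(policy_loss, leaf_params, lam=1e-3):
    """RL training loss plus an L1 penalty on leaf-node parameters.

    leaf_params: list of per-leaf parameter vectors; lam is a hypothetical
    regularization coefficient. The penalty is the sum of absolute values
    of all leaf parameters, added to the base policy loss.
    """
    penalty = sum(abs(w) for leaf in leaf_params for w in leaf)
    return policy_loss + lam * penalty
```

In practice such a penalty would be computed on autograd tensors inside the RL update so its gradient shrinks the leaf parameters during training.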
**Population bias and how it might affect the generalizability of your findings? Also, maybe sex/age-based difference analysis is also worth exploring.** -- Our study was conducted at a university with a diverse population majoring in a variety of different engineering disciplines (Electrical and Computer, Biomedical, Aerospace, Chemical and Biomolecular), economics, and variety of sciences (computer science, neuroscience, atmospheric). All users within our experiment had some college education and were enrolled at an engineering-focused university. As such, these findings may not generalize beyond this population. In our data collection, we collected demographic information including age and sex. In our analysis, these variables were used and we did not find any significant trends that displayed that age or sex led to differences in Human-AI collaboration performance.
In generalizing our results to a broader population, we believe our findings can be augmented by other literature in human-AI interaction to design better interfaces for specific populations or domain-specific collaborative agents.
**If possible, could you explain more about the potential challenges in scaling the human-led policy modification approach to more complex and dynamic environments beyond the Overcooked-AI domain?** -- Thank you for this excellent question. In scaling to more complex and dynamic environments, the tree size needed to represent a high-performing agent will likely increase. In these cases, users may require more time to interact with and understand an agent's policy. There are also several capabilities that can be added to human-led policy modification which may make the process quicker and easier. For example, model verification can be added into the tree modification interface to detect problems with logic (detect regions of the tree that cannot be reached). This would allow the human to receive other types of feedback prior to teaming with the AI. For complex games, agent policies can also operate over different levels of abstraction providing the human with a tradeoff with fine-grained control of the agent policy and tree size. There may also be other more accessible mediums, such as language, that the human can use to program a large tree policy.
**Missing citations** -- We thank the reviewer for noting this deficiency. Within our literature review, we will add the following recent papers that are closely related to our work.
[2] Hong, Joey, Sergey Levine, and Anca Dragan. "Learning to influence human behavior with offline reinforcement learning." Advances in Neural Information Processing Systems 36 (2024).
[3] Wang, Chenxu, et al. "On the Utility of External Agent Intention Predictor for Human-AI Coordination." arXiv preprint arXiv:2405.02229 (2024).
[4] Guan, Cong, et al. "One by One, Continual Coordinating with Humans via Hyper-Teammate Identification."
[5] Tulli, Silvia, Stylianos Loukas Vasileiou, and Sarath Sreedharan. "Human-Modeling in Sequential Decision-Making: An Analysis through the Lens of Human-Aware AI." arXiv preprint arXiv:2405.07773 (2024).
**Abbreviations and Reference Format ** -- We have updated our paper language per your comments and updated our references to be consistent.
**Ethics Review Flag** -- As mentioned in Line 342 of our paper, our experiment was approved by a university institutional review board (IRB). All participants in our experiment signed a consent form, received a description of the risks involved in our study, and received compensation.
---
Rebuttal Comment 1.1:
Title: Updating
Comment: Thanks for the responses from the author(s). The responses from the author(s) clarified some of my concerns to some extent. I increased my score from 4 to 5. | Summary: This paper focuses on developing strategies to enhance transparency and interpretability in human-AI teaming settings. Based on my understanding, two collaboration contexts have been considered: human-preferred collaboration and AI-preferred suboptimal teaming strategy. The authors have implemented specific strategies to tackle these contexts and address interpretability. The implemented strategies have been evaluated on the Overcooked game with real human users.
Strengths: The paper is well-motivated, particularly in its approach to learning human preferences and using reinforcement learning (RL) to train the agent.
The paper provides valuable insights into designing human-AI collaboration interfaces to enhance user trust and acceptance, addressing a timely and relatively unexplored domain.
Weaknesses: The main weakness of the paper is that it is very hard to follow, and the methodology lacks transparency. For instance, the paper does not present a single method to investigate explainability. Overall, the chosen modifications and the designed collaboration setups are not clearly presented. It is not straightforward to link Section 4 (Methodology) with Section 5 (Studies).
Similarly, the experimental results are very hard to follow. The numbers are presented but are not straightforward to interpret or use for cross-checking the claims. In particular, the evaluation metrics are not clear.
Additionally, only one collaboration environment has been considered. The findings might be biased toward this specific collaboration task.
Technical Quality: 2
Clarity: 2
Questions for Authors: Three main questions are:
- How do you envision improving the transparency of the proposed methodologies? How can Section 4 be better linked with Section 5?
- What are the evaluation metrics for the chosen collaboration settings? How is interpretability evaluated?
- Are there any other collaboration environments that can be considered for this line of research? How do you envision these findings being generalized to other collaboration tasks?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors discussed the limitations and the broader impact of their work adequately in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for noting that our paper is well-motivated, provides valuable insight into designing human-AI collaboration interfaces, and studies a relatively underexplored domain. We have responded to the weaknesses and questions noted by the reviewer.
**Comparison to Explainability Approaches** -- The reviewer is correct in that we do not compare against explainability approaches that utilize local explanations to explain the decision-making of autonomous agents. While these approaches should be tested to explain behavior in the tight human-AI collaboration settings we consider, local explanations can be misleading and may not accurately represent the agent's decision-making behavior. This would lead to another dimension that needs validation. As this area of research (transparency and adaptability of collaborative agents in repeated human-AI collaboration) is relatively new, we hope our research can spawn further studies of this kind.
**How can Section 4 be better linked with Section 5?** -- Section 4 presents an interpretable machine learning architecture to train collaborative AI teammates, Interpretable Discrete Control Trees (4.1), a training advancement to enhance interpretability, and a mechanism to allow humans to modify the tree in simple ways, including tree deepening, decision variable modification, and leaf node modification. The creation of this capability (interpretable tree architecture + human modification) is our main proposed condition, IV1-C1: Human-Led Policy Modification in Section 5.
The following conditions IV1-C2: AI-Led Policy Modification, IV1-C3: Static Policy - Interpretability, IV1-C4: Static Policy - Black-Box all utilize the architecture introduced in Section 4 but ablate the interaction and interpretability. We tried to depict this gradual feature reduction across our proposed conditions in Table 1. To improve the linkage between Section 4 and Section 5, we propose to create a new section between 4 and 5 that presents information regarding the training and results of the IDCT model. In creating this section to improve clarity, we would augment details from the author response alongside the paper lines 301-321.
**How do you envision improving the transparency of the proposed methodologies? --** To improve the transparency of the proposed methodologies, we have created a flow diagram (uploaded as Figure 1 in the attached rebuttal pdf) of each condition that helps display visually the experiment flow and interaction being assumed within the proposed approach. We have also included a diagram of how IDCT agents are generated as part of Figure 2 (left). Both of these figures will be added into the main paper as part of the appendix and alongside the new aforementioned section will improve the paper's clarity.
**What are the evaluation metrics for the chosen collaboration settings?** -- The main objective evaluation metric used to evaluate the performance of the collaboration is the game score. Section B of the appendix describes the exact scoring function that makes up the game score. In short, high score bonuses are obtained for full dishes served, and minor score bonuses are given for smaller objectives like filling a pot or picking up a dish. In the first domain, Forced Coordination, successful dish serving is not possible without resources being handed between the AI and human. In the second domain: Optional Collaboration, agents can serve dishes without explicitly collaborating with the human, but this domain was intentionally designed such that collaboration with the human via timely resource handoffs would result in a higher score than without collaboration.
**How is interpretability evaluated?** -- The interpretability of the model isn't evaluated explicitly. In our hyperparameter search for training the tree-based agent policy, we choose the highest-performing model with under 32 leaves. Prior work [1] notes that there is a cognitive limit on how complex a model can be while still being human-understandable and thus, we prioritized selecting a high-performing IDCT model that was still relatively small. We note that as conditions IV:C1-C4 utilize the same model, utilizing interpretability metrics such as tree size does not provide additional information in understanding the tradeoff between these conditions.
[1] Lakkaraju et al. Interpretable decision sets: A joint framework for description and prediction.
**Are there any other collaboration environments that can be considered for this line of research? How do you envision these findings being generalized to other collaboration tasks?** -- We believe research in repeated teaming with collaborative, transparent agents should be studied in more complex domains such as Minecraft, Dota 2, and Starcraft. Our work utilizes the relatively low-dimensional Overcooked setting where techniques like online optimization and tree manipulation by a human end-user can be done within a short time period. This is vital in allowing for a feasible single-session user study to be conducted.
However, many of our takeaways generalize beyond our single setting of Overcooked to the field of human-AI collaboration and also have applicability to real-world applications in collaborative robotics. In many settings, teams of agents and humans will need to go through stages of team development. Our work shows that model transparency and the ability to interact with policies help in this regard. While there may be domains where white-box models are already competitive with the initial performance of black-box models, there is still research needed to improve interfaces so that users can interact successfully with agent models. Within our domain, we found through a usability survey that there was a large disparity between users finding the interface good (>75) and bad (<35). This finding also generalizes to many domains that have human collaborators coming from different backgrounds and expertise levels.
---
Rebuttal 2:
Title: reply: Rebuttal by Authors
Comment: After considering all the feedback and responses, while I am not fully convinced of the paper's methodological strength, the idea and its execution are both original and solid. The paper offers insightful findings, which motivated me to raise my score. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their insightful reviews and valuable feedback on our paper. We have included a rebuttal document with additional figures as well as provided rebuttals to each reviewer below.
Pdf: /pdf/3b8a150973800409209615a357eccfb2ebee69af.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors | Accept (poster) | Summary: It is proposed to solve image inverse problems with diffusion priors by split Gibbs sampling (SGS). To sample from the product of prior and likelihood distributions, SGS assumes a pair of variables $x,z$ coupled by a Gaussian and alternates conditional sampling steps: on $x$ using the prior density multiplied with the conditional Gaussian and on $z$ using the likelihood multiplied with the conditional Gaussian. When the prior is a diffusion model, the two steps can be implemented by modifying the given pretrained score model and by Langevin dynamics (or exactly in the case of a linear problem), respectively. This procedure is evaluated on a few standard linear and nonlinear IPs and on a black hole imaging problem, and the reconstructions found by the proposed method are typically visually and quantitatively better than the ones found by algorithms from prior work.
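The split formulation the summary describes can be sketched as follows (notation assumed here for illustration — a coupling strength $\rho$, prior $p$, and likelihood $\ell$ — and may differ from the paper's):

```latex
% Coupled target with Gaussian coupling of strength rho
\pi_\rho(x, z) \propto p(x)\,\ell(y \mid z)\,
  \exp\!\Big(-\tfrac{1}{2\rho^2}\lVert x - z\rVert^2\Big)

% Gibbs step 1: prior times conditional Gaussian (denoising/diffusion step)
x \sim \pi_\rho(x \mid z) \propto p(x)\,
  \exp\!\Big(-\tfrac{1}{2\rho^2}\lVert x - z\rVert^2\Big)

% Gibbs step 2: likelihood times conditional Gaussian (Langevin, or exact
% for a linear forward model with Gaussian noise)
z \sim \pi_\rho(z \mid x) \propto \ell(y \mid z)\,
  \exp\!\Big(-\tfrac{1}{2\rho^2}\lVert x - z\rVert^2\Big)
```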
Strengths: - The writing is mostly clear; I have no complaints on the paper organization or exposition.
- In particular, the exposition of SGS in the general case followed by its application to DM priors is helpful.
- As far as I know, this is an original way of solving the diffusion posterior sampling problem.
- Strong results (if we are to accept that they support the claims) and application to a real imaging problem.
- Code is provided.
Weaknesses: - On fairness and strength of experimental comparisons:
- The results do not come with confidence intervals, which makes it hard to assess significance.
- I may have missed/misunderstood this, but are all methods using the same pretrained prior? Appendix D seems to suggest otherwise, which would be a problem (it isn't fair to compare posterior sampling methods with different priors).
- Similarly, different methods seem to be assuming different numbers of sampling steps and otherwise incompatible choices. How were hyperparameters, such as the $\rho_k$ annealing schedule, chosen?
- Computation cost is not sufficiently discussed.
- The method is run for at least a hundred Gibbs iterations in each experiment, and each one involves diffusion sampling initialized from some intermediate noise level. This makes the number of model evaluations quite large. It would be necessary to compare the number of function calls, as well as wall time, for all the methods.
- Convergence rate:
- Theorem 3.1 tells us the rate of convergence to the stationary process for a fixed $\rho$, but not how close the stationary marginal is to the true posterior. They are not equal, in general, for positive $\rho$, and the rate of convergence shown has a $\frac1\rho$ factor, which means the convergence guarantees with small $\rho$ are weaker.
- To understand the method, it would be important to show the dependence on the number of iterations, both by showing examples at different iterations in an illustrative case and by tracking the convergence of some metric at a function of the number of steps.
- Please also see questions below.
Technical Quality: 3
Clarity: 3
Questions for Authors: - After equation (1), writing $y=A(x)+n$ where $n\in\mathbb{R}^n$ means that $y$ depends deterministically on $x$. In fact $n$ should be a random variable taking values in $\mathbb{R}^n$, which makes $y$ also a random variable.
- I am surprised not to see any reference to the paper whose title is a suffix of this one's, ["Diffusion models as plug-and-play priors", NeurIPS'22], also abbreviated "PnP" in some subsequent work. There a stochastic optimisation is used to find modes of the posterior.
- Related to this, "PnP-DM" is not a very informative name for the proposed algorithm when so many of the baseline methods already have "PnP" in the name in some other configuration of letters.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, but it would be good to compare computation costs, see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. Below we provide point-by-point responses to your comments.
> Weakness 1.1
We provide the confidence intervals of all the results in Tables 1 and 2 in the attached PDF. Based on these results, we claim that our method achieves comparable or superior outcomes on most tasks, particularly for complex nonlinear inverse problems such as Fourier phase retrieval. Compared to the baseline methods, our standard deviation is slightly lower, indicating comparable and slightly better robustness of our approach.
To better show the significance of the improvement of our method over baselines, we further conducted a sample-wise PSNR comparison between PnP-DM (EDM) and the most competitive baseline DPnP. The table below shows the sample-wise improvement rate of PnP-DM over DPnP and the $p$-value of the one-sided t-test for each inverse problem. Note that our method improves on *all samples in the test set* and the $p$-values indicate high statistical significance.
**Response Table 2:** Sample-wise comparison between our method and DPnP [2].
| | Gaussian deblur | Motion deblur | Super-resolution | Coded diffraction patterns | Fourier phase retrieval |
|------------------|-----------------|---------------|------------------|----------------------------|-------------------------|
| Improvement rate | 100% | 100% | 100% | 100% | 100% |
| t-test $p$-value | 8.18e-8 | 1.15e-6 | 9.40e-11 | 3.43e-70 | 2.28e-2 |
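As a sketch of how such a sample-wise comparison might be computed (function names are illustrative; the reported $p$-values would additionally require the CDF of a $t$-distribution with $n-1$ degrees of freedom, e.g. via `scipy.stats.ttest_rel` with `alternative='greater'`):

```python
import math

def improvement_rate(psnr_a, psnr_b):
    """Fraction of test samples where method A scores above method B."""
    wins = sum(a > b for a, b in zip(psnr_a, psnr_b))
    return wins / len(psnr_a)

def paired_t_statistic(psnr_a, psnr_b):
    """Paired (sample-wise) t statistic for H1: mean(A - B) > 0."""
    d = [a - b for a, b in zip(psnr_a, psnr_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # unbiased sample variance
    return mean / math.sqrt(var / n)
```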
> Weakness 1.2
As mentioned in Page 7 line 244-246, we used the same model checkpoints for all diffusion model-based methods, which include DDRM, DPS, PnP-SGS, DPnP, and our PnP-DM. This suggests that these sampling methods should share the same target posterior distribution.
For our method, we fine-tuned the parameters in the annealing schedule $\rho_0$, $\alpha$, and $\rho_\min$ (notations from Appendix C.3) by performing a grid search over 20 FFHQ images outside of the test set we used for final reporting. Please also see the response to Question 2 by Reviewer u5tS.
> Weakness 1.3
Please refer to the response to all reviewers above for a comparison on computational efficiency. To clarify, as we mentioned in Appendix C.2 "Pseudocode" paragraph, the total number of time steps for the *entire* diffusion process is 100, but we only start running from some intermediate noise level. Therefore, the number of NFEs for each iteration is less than 100. In fact, according to the table, the average number of NFEs is only around 30 per iteration given the current annealing schedule.
> Weakness 2.1
Although we agree that there is a gap between the stationary process and the true posterior, we would like to argue that the convergence guarantees with small $\rho$ are not weaker and that the $\frac{1}{\rho}$ factor is in fact expected. For SGS-based methods, $\rho$ can be interpreted as the "step size." The step size appears in a similar way in the bounds of prior non-asymptotic analyses of the convergence of Langevin-based MCMC algorithms [8, 9]. For these algorithms, it is common to have an $O(1/T)$ convergence rate, where $T$ is the total diffusion time and scales with the step size, so a $\frac{1}{\text{step size}}$ factor appears on the right-hand side. This also justifies the annealing strategy, which starts with a large $\rho$ (hence a large step size) when the distribution is far from the target and gradually decreases it.
> Weakness 2.2
Thank you for your suggestion. In Figure 2 of the attached PDF file, we show some visual examples of intermediate $\boldsymbol{x}$ and $\boldsymbol{z}$ iterates (left) and convergence plots of PSNR, SSIM, and LPIPS for $\boldsymbol{x}$ iterates (right) on the super-resolution problem. As $\rho_k$ decreases, the $\boldsymbol{x}$ iterate becomes closer to the ground truth and the $\boldsymbol{z}$ iterate gets less noisy. Both the visual quality and metric curves stabilize after the minimum coupling strength $\rho_{\min}$ is achieved. Despite being run for 100 iterations in total, our method generates good images in around 40 iterations, which is around 30 seconds and 1600 NFEs. We can include these results in our supplemental material of the revised manuscript.
> Question 1
We will change the writing on this part in the final version of the paper.
> Question 2
Thank you for bringing this paper to our attention. We notice that [one reviewer's comment](https://openreview.net/forum?id=yhlMZ3iR7Pu&noteId=IRfkHFMGG6#:~:text=for%20downstream%20tasks.-,The%20naming%20of%20the%20framework%20is%20confusing.%20Plug%2Dand%2Dplay%20priors%20(PnP)%20is%20a%20well%2Dknown%20framework%20in%20the%20literature%20of%20image%20restoration%20and%20imaging%20inverse%20problems.%20The%20title%20confuses%20me%20in%20the%20first%20place.,-The%20naming%20is) of this work also touches on the naming of the method. According to the authors' response, this happens to be a clash of terminology. However, we will include this paper in the related work. As for the "PnP-DM" name, we understand that "PnP" is so commonly used that it may cause some ambiguity. We used it mainly to highlight its similarity to deterministic PnP methods that incorporate an image prior via denoising.
---
Rebuttal Comment 1.1:
Comment: I'm happy to recommend acceptance given the clarifications. Thank you. | Summary: This paper proposes a novel posterior sampling algorithm for solving inverse problems with diffusion models. The proposed algorithm is based on the Split Gibbs sampling scheme and consists of first introducing an extended distribution $\pi_\rho(x, z)$ that admits the original posterior $\pi$ as its $x$-marginal as $\rho$ tends to $0$. Then, for a fixed $\rho$, this joint distribution can be sampled via Gibbs sampling; the form of the joint distribution $\pi_\rho(x, z)$ allows sampling both $X$ and $Z$ easily. For $Z$, this can be done exactly in the case of linear inverse problems, or with Langevin Monte Carlo otherwise. For $X$, the particular structure of the joint distribution is exploited and, as it turns out, it can be sampled by simply running the learned backward diffusion. Notably, this is the only step that requires using the learned denoiser. The theoretical properties of the ideal sampler are investigated and extensive experiments are considered.
Strengths: I highly enjoyed going through this paper and I found it to be interesting. The strengths are:
- The method is original and clever; the joint distribution introduced, which stems from the SGS approach, allows almost exact sampling of both conditional distributions. Furthermore, the considered framework allows having actual theoretical guarantees for the sampler (although in an ideal case). This is still quite informative and contrasts with existing approaches (besides the SMC-based ones, which also allow more straightforward theoretical guarantees). The main perk of this method with respect to the latter is that it has theoretical guarantees while not requiring absurd memory requirements due to the storage of a large number of particles.
- The writing is extremely clear. The paper is written in a way that is straightforward to understand while still being quite detailed. This is a significant perk (still, I think that Section 3.4 could be improved, as I had to go through the supplementary material to actually understand what the proof technique was).
- Finally, the experiments are very good; the authors first consider a toy example in which the posterior has a closed form. This serves as a nice introduction to the experimental section and showcases the perks of the present method in comparison with DPS. Next, the imaging experiments are rigorously executed, with a precise look at the uncertainty quantification. Finally, an original black hole experiment is considered and I found it to be very interesting.
Weaknesses: - **Computational cost**: I believe that the proposed method is much slower than existing alternatives. This is not that much of an issue for me (but of course this depends on the reader and practitioner). In my opinion the compute time is not properly addressed; although the authors discuss it at the bottom of page 21, I would like to see a proper runtime comparison with existing methods. I would like to emphasize that this is only a minor weakness.
- **Originality**: I believe that the proposed method is related to SDEdit [1]; it can be thought of as a variant of SDEdit. To see why this is the case, when the present algorithm is applied to inpainting, the first iteration's likelihood step will result in a noisy image with the observation already present in it. This image is then denoised using the backward process during the prior step. This is in fact very similar to SDEdit, which starts with some initial image, say the pseudo-inverse of the observation, noises it to some given noise level, and denoises it using the backward diffusion process. See Algorithm 3, [1]. I believe that the main difference with the algorithm presented in your paper is the use of the decreasing noise schedule, which makes sense theoretically. For a fair treatment I suggest you mention this in your paper.
[1] Meng, C., He, Y., Song, Y., Song, J., Wu, J., Zhu, J.Y. and Ermon, S., 2021. Sdedit: Guided image synthesis and editing with stochastic differential equations.
Technical Quality: 4
Clarity: 4
Questions for Authors: see above
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your evaluation of our work. Below we provide point-by-point responses to your comments.
> **Computational cost**: I believe that the proposed method is much slower than existing alternatives. This is not that much of an issue for me (but of course this depends on the reader and practitioner). In my opinion the compute time is not properly addressed; although the authors discuss it in the bottom of page 21, I would like to see a proper runtime comparison with existing methods. I would like to emphasize that this is only a minor weakness.
We provide a comparison on computational efficiency for all methods in Response Table 1 above. Our method PnP-DM achieves superior performance while remaining comparable in runtime with DPS, with a runtime no more than 1.5 times that of DPS. In fact, for the linear inverse problem experiments, PnP-DM was faster than DPS because we were able to take multiple samples along one Markov chain generated by PnP-DM but DPS requires running multiple diffusion processes.
> **Originality**: I believe that the proposed method is related to SDEdit [7]; it can be thought of as a variant of SDEdit. Indeed, to see why this is the case, when the present algorithm is applied to inpainting, the first iteration, the likelihood step, will result in a noisy image with the observation already present In it. This image is then denoised using the backward process during the prior step. This is in fact very similar to sdedit which starts with some initial image, say the pseudo-inverse of the observation, noises it to some given noise level and denoises it using the backward Diffusion process. See Algorithm 3, [7]. I believe that the main difference with the algorithm presented in your paper is the use of the decreasing noise schedule, which makes sense theoretically. For a fair treatment I suggest you mention this in your paper.
Thank you for bringing this work to our attention. This work indeed shares some similarity with our own in that it employs a process of noising followed by denoising. Nevertheless, our method is more theoretically grounded and meant for rigorously estimating the posterior of an inverse problem, whereas SDEdit is more empirically driven and designed for solving image synthesis/editing problems. We will cite and discuss [7] in the related work section in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
Reviewer u5tS mentions that this work bears significant similarities with the work of Coeurdoux et al., and I agree that this is the case, with the main difference being that they use a constant $\rho$; this is also more or less the main difference between your work and that of [1], which I mention in my initial review. While reading the paper I did not notice these similarities with the work of Coeurdoux et al.; this means that the paper in its current form does not adequately address the related work and might mislead the reader into thinking that the paper is much more novel than it actually is.
Nonetheless, the empirical results are promising and show that the decreasing schedule is indeed a good idea. I maintain my score, but I strongly recommend the authors modify their paper and clearly state the similarities with previous works.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will provide additional clarifications on the similarities and differences between our work and prior works in the final version. Please also see [responses W1.1--W1.3](https://openreview.net/forum?id=Xq9HQf7VNV¬eId=O9TzVFCQmB) to Reviewer u5tS for more information on other differences. | Summary: The paper introduces a method to address Bayesian inverse problems in computational imaging. It leverages the generative capabilities of diffusion models (DMs) to sample the posterior distribution over all possible solutions from noisy and sparse measurements. The method combines a Markov chain Monte Carlo (MCMC) algorithm with a general DM formulation to perform rigorous posterior sampling, effectively integrating state-of-the-art DMs as expressive image priors. The method deviates from current methods that rely on approximating the intractable posterior via separating the forward operator from an unconditional prior over the intermediate noisy image. The approach is validated on six inverse problems, demonstrating superior accuracy and posterior estimation compared to existing DM-based methods.
Strengths: - The paper presents a novel combination of MCMC and diffusion models to solve inverse problems. This use of DMs as priors in a Bayesian framework can be integrated into SOTA diffusion models as plug-n-play expressive image priors for Bayesian inference.
- The presented method is rigorous. The use of the Split Gibbs Sampler and the EDM formulation for the prior step is well-executed. The theoretical insights provided, including the stationarity guarantee in terms of average Fisher information, add depth to the method.
- The paper is well-structured and supported by diagrams and pseudocode to enhance understanding. The distinction between existing methods and the proposed approach is clearly articulated.
- Experiments show the efficacy of the proposed method compared to existing methods on a diverse set of real-world problems, highlighting its potential broader impact on computational imaging applications.
Weaknesses: - While the method demonstrates superior performance, the computational cost and efficiency are not thoroughly discussed. A comparison of computational resources required compared to other methods would be beneficial.
- The impact of various parameters, such as the annealing schedule for the coupling parameter ρ, on the method's performance is not deeply investigated. A sensitivity analysis could provide insights into the method's robustness.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can the authors compare the computational resources and time required for PnP-DM versus existing DM-based methods?
2. How sensitive is the performance of PnP-DM to the choice of the annealing schedule for the coupling parameter ρ? Is there an optimal range or strategy for selecting these parameters?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback on our work. Below we provide point-by-point responses to your comments.
> While the method demonstrates superior performance, the computational cost and efficiency are not thoroughly discussed. A comparison of computational resources required compared to other methods would be beneficial. Can the authors compare the computational resources and time required for PnP-DM versus existing DM-based methods?
We provide a comparison on computational efficiency for all methods in Response Table 1 above. Our method PnP-DM achieves superior performance while remaining comparable in runtime with DPS, with a runtime no more than 1.5 times that of DPS. In fact, for the linear inverse problem experiments, PnP-DM was faster than DPS because we were able to take multiple samples along one Markov chain generated by PnP-DM but DPS requires running multiple diffusion processes.
> The impact of various parameters, such as the annealing schedule for the coupling parameter ρ, on the method's performance is not deeply investigated. A sensitivity analysis could provide insights into the method's robustness. How sensitive is the performance of PnP-DM to the choice of the annealing schedule for the coupling parameter ρ? Is there an optimal range or strategy for selecting these parameters?
Following your suggestion, we have included a sensitivity analysis on the annealing schedule. In Figure 1 of the attached PDF file, we show the PSNR curves for different exponential decay rates $\alpha$ (left) and minimum coupling levels $\rho_{\min}$ (right) on one linear (super-resolution) and one nonlinear (coded diffraction patterns) problem.
We have the following conclusions based on the results. First, different decay rates lead to different rates of convergence, which corroborates with our theoretical insights that $\rho$ plays the same role as the step size. The final level of PSNR is not sensitive to different decay rates, as all curves converge to the same level. Second, as $\rho_{\min}$ decreases, the final PSNR becomes higher as the stationary distribution converges to the true target posterior.
Our strategy for choosing the annealing parameters is as follows. The starting coupling strength $\rho_0$ should be large to overcome the ill-posedness of the problem (usually around 5 to 10 is sufficient). The minimum coupling strength $\rho_{\min}$ should be small to ensure that the stationary distribution is close to the target posterior; empirically, values around 0.1 to 0.3 work best. The choice of decay rate $\alpha$ is generally flexible; a value around 0.9 usually leads to good results.
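For concreteness, such a schedule can be sketched as follows (assuming an exponential decay of the form $\rho_k = \max(\rho_0\,\alpha^k, \rho_{\min})$; this is an illustrative simplification and the exact schedule is given in Appendix C.3):

```python
def annealing_schedule(rho0=10.0, alpha=0.9, rho_min=0.3, num_iters=100):
    """Exponentially decaying coupling strength, floored at rho_min.

    rho0:    starting coupling strength (large, to overcome ill-posedness)
    alpha:   exponential decay rate per iteration
    rho_min: minimum coupling strength (small, so the stationary
             distribution is close to the target posterior)
    """
    return [max(rho0 * alpha**k, rho_min) for k in range(num_iters)]

rhos = annealing_schedule()
print(rhos[0], rhos[-1])  # starts at rho0 and ends floored at rho_min
```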
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification and additional experiments. This makes sense to me. | Summary: This paper proposes a Markov Chain Monte Carlo algorithm for posterior sampling in both linear and non-linear inverse problems. The core of the proposed method is based on a Split Gibbs Sampler that alternates between two steps: one involving the likelihood and the other the prior. Additionally, the paper connects the Bayesian denoising problem with unconditional generation using Diffusion Models. The proposed method is validated on a range of linear and non-linear inverse problems, with an additional real-world application in black hole imaging.
Strengths: - The proposed method outperforms DPS, with satisfactory evidence provided by the authors.
- Unlike some previous works, the proposed method is effective for both linear and non-linear inverse problems.
- The paper provides sufficient theoretical analysis to support the proposed method.
Weaknesses: There are no apparent weaknesses. However, I am curious about the reconstruction speed comparison (seconds/image) between DPS and the proposed method. It would be practically very attractive for the community to use this algorithm if a rigorous comparison of reconstruction speed is made.
Technical Quality: 4
Clarity: 4
Questions for Authors: No questions.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: No limitations other than those mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your evaluation of our paper. We respond to your comment below.
> There are no apparent weaknesses. However, I am curious about the reconstruction speed comparison (seconds/image) between DPS and the proposed method. It would be practically very attractive for the community to use this algorithm if a rigorous comparison of reconstruction speed is made.
We provide a comparison on computational efficiency for all methods in Response Table 1 above. Our method PnP-DM achieves superior performance while remaining comparable in runtime with DPS, with a runtime no more than 1.5 times that of DPS. In fact, for the linear inverse problem experiments, PnP-DM was faster than DPS because we were able to take multiple samples along one Markov chain generated by PnP-DM but DPS requires running multiple diffusion processes.
---
Rebuttal 2:
Comment: Thank you for sharing the sampling speed results. The shared results make sense to me. I strongly feel that this paper provides novel enough contributions to be considered for acceptance. I have raised my confidence score to 4. Hope this helps AC decide about the paper. | Rebuttal 1:
Rebuttal: ## **Response to all reviewers**
We thank all the reviewers for their careful reviews and constructive feedback. We are glad that our method was recognized as "rigorous", "well-executed" (reviewer G2Ty), and “unlike some previous works...effective for both linear and non-linear inverse problems” (reviewer G64m), including “an application to a real imaging problem” (reviewer 43sJ). We are also encouraged that the reviewers found our paper "well-written" (reviewer u5tS) and the writing "extremely clear" (reviewer TmFe).
In the responses below, we address the reviewers' comments individually. Additional experiments are presented in the attached PDF file.
--------------------------------------------------------------------------------
### **Common response to computational efficiency**
We present a comparison of computational efficiency with the major baselines on a linear super-resolution and a nonlinear coded diffraction patterns problem in Response Table 1 below. The clock time in seconds and number of function evaluations (NFE) are calculated for each method to measure its computational efficiency. All hyperparameters are kept the same for each method as those used for Table 1 and Table 2 in the manuscript.
As expected, DM-based approaches (DDRM & DPS) generally yield shorter runtimes due to their lower NFEs. Nevertheless, our PnP-DM method significantly outperforms these methods while achieving comparable runtimes with DPS ($\approx 1.5\times$), despite its larger NFEs ($\approx 3\times$). This is primarily due to two factors: 1) PnP-DM avoids running the full diffusion process by adapting the starting noise level to $\rho_k$ at each iteration, and 2) the runtime is further reduced by using an annealing schedule of $\rho_k$.
We also note that the runtime reported for DDRM and DPS below is the time it takes to generate one sample. For the linear inverse problem experiments, where we generated 20 samples for each sampling method, PnP-DM was faster than DPS because we took 20 samples that PnP-DM generated along one Markov chain of batch size 1 (hence same runtime as below, around 50 seconds) but DPS requires running a diffusion process with batch size 20, which was significantly slower (around 330 seconds).
**Response Table 1**: Comparison of runtime in seconds and number of function evaluations (NFE) for our method and baselines.
| | Metric | DDRM | DPS | PnP-SGS | DPnP | PnP-DM (ours) |
|----------------------------|----------------|------|-----|---------|-------|---------------|
| Super-resolution | Clock time (s) | 0.4 | 39 | 20 | 322 | 55 |
| | NFE | 20 | 1000 | 1030 | 18372 | 3032 |
| Coded diffraction patterns | Clock time (s) | -- | 37 | 54 | 261 | 50 |
| | NFE | -- | 1000 | 2572 | 14596 | 2482 |
--------------------------------------------------------------------------------
### **References for all responses**
[1] Coeurdoux et al., 2023. Plug-and-play split Gibbs sampler: embedding deep generative priors in Bayesian inference.
[2] Xu and Chi, 2024. Provably robust score-based diffusion posterior sampling for plug-and-play image reconstruction.
[3] Karras, et al., 2022. Elucidating the design space of diffusion-based generative models.
[4] Chen et al., 2023. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions.
[5] Yuan et al., 2023. On a class of Gibbs sampling over networks.
[6] Zhang et al., 2022. Plug-and-play image restoration with deep denoiser prior.
[7] Meng, C., He, Y., Song, Y., Song, J., Wu, J., Zhu, J.Y. and Ermon, S., 2021. SDEdit: Guided image synthesis and editing with stochastic differential equations.
[8] Balasubramanian et al., 2022. Towards a theory of non-log-concave sampling: first-order stationarity guarantees for Langevin Monte Carlo.
[9] Sun et al., 2023. Provable probabilistic imaging using score-based generative priors.
Pdf: /pdf/082bdde1e1fa87c2e12ce3e5f3f9250b6b21c22e.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: In this paper, the authors treat the problem of sampling from the posterior of image inverse problems. Their formulation is based on the Split Gibbs Sampler (SGS) algorithm, which alternates between sampling from Moreau-regularized versions of the prior and of the likelihood. The main contribution of the paper is to use the EDM formulation of diffusion models for sampling the prior term. The EDM formulation was proposed to unify various diffusion models in a common general formulation.
Strengths: Firstly, the paper is well-written and provides clear and concise explanations of the methods and contributions. The authors have also included theoretical insights that guarantee the convergence of each sampling step of the algorithm to the right stationary distribution as the number of iterations tends to infinity.
The strength of the paper is mainly its experimental section, which covers a range of both linear and non-linear inverse problems. Additionally, I find the inclusion of a toy experiment with a Gaussian prior interesting, as it allows comparison to ground-truth posterior distributions.
Weaknesses: 1. **Significance of Contribution**: My primary concern is the overall significance of the contribution. When compared to the works of Coeurdoux et al. (2023) and Xu & Chi (2024), the advancements appear to be relatively minor. Specifically, the contribution seems to involve using a more general formulation of diffusion models (EDM) instead of the more specific DDPM or DDIM. Are there additional conceptual differences? Furthermore, given this similarity, what accounts for the significant performance disparity with these methods?
2. **Algorithm Efficiency**: The algorithms seem to require considerable computational time, necessitating the diffusion process to be run at each iteration. While this is mathematically sensible, it appears excessive. Given the annealing process chosen, how many calls to the denoiser are typically made in practice? Additionally, how does this number compare to methods that perform a single diffusion using an approximation of p(y|x_k), such as DPS? Including a comparative table in the paper would be valuable.
3. **Theoretical Results**: The theoretical results presented, while interesting, are somewhat limited as they assume a fixed rho parameter, which is not the case in practical applications. Additionally, the algorithm is presumed to run with the true score, which is also not realistic. In addition, although it may be beyond the scope of the current paper, providing theoretical insights into the convergence of the proposed Gibbs Sampler (with approximate score) to the true posterior would be beneficial.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Figure 4, the diffusion process is represented in a closed-form. Did you train a denoiser, or did you use the closed-form score instead?
- How did you tune the hyperparameters for your method and for each method in the comparison? Did you use their default hyperparameters? For example, DPIR provides hyperparameters fine-tuned for various inverse problems. However, the SR x4 task is not one of the inverse problems for which DPIR's authors fine-tuned their hyperparameters, likely making the default hyperparameters sub-optimal for this task.
- Did you employ any regularization parameters to balance the data-fidelity and regularization terms?
- For the backward diffusion process, did you use the same number of iterations for all values of $\rho_k$ ? Specifically, is the step size of the diffusion process consistent, or is it adjusted according to the value of $\rho_k$ ?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts on reviewing our paper. Below we provide point-by-point responses to your comments:
> Weakness 1
We believe that our work has several significant contributions over the existing works [1] and [2]. Moreover, we highlight that according to [NeurIPS 2024 policy](https://neurips.cc/Conferences/2024/CallForPapers), our work should be considered as a concurrent work with [2] and "not be rejected on the basis of the comparison to contemporaneous work" as [2] was first published on arXiv on March 25th, 2024, which is within two months of the submission deadline of NeurIPS 2024.
- Compared to [1], the significance of our work lies in three aspects:
- Nonlinear inverse problems are beyond the scope of [1], while we have investigated three different nonlinear inverse problems.
- We have adopted a more rigorous formulation than [1]. As also pointed out by [2], the denoiser design based on diffusion models in [1] is heuristic.
- Unlike PnP-SGS [1], we consider an annealing strategy, which is important for ill-posed inverse problems.
- Compared to [2], our work has the following contributions:
- Our work proposes a more general formulation based on EDM. To the best of our knowledge, using EDM as a rigorous Bayesian denoiser for solving inverse problems has not been explored in prior literature.
    - Our proposed method achieves a significant performance improvement over DPnP [2]. Response Table 2 in the response to Reviewer 43sJ below further shows the statistical significance of the improvement. One reason for the performance gain is that we inherited the optimized design choices from EDM [3], which provides better image quality with fewer diffusion steps. Another reason is that our framework provides a larger design space with more flexible parameter choices.
> Weakness 2
We provide a comparison on computational efficiency for all methods in Response Table 1 above. Our method PnP-DM achieves superior performance while remaining comparable in runtime with DPS, with a runtime no more than 1.5 times that of DPS. In fact, for the linear inverse problem experiments, PnP-DM was faster than DPS because we were able to take multiple samples along one Markov chain generated by PnP-DM but DPS requires running multiple diffusion processes.
> Weakness 3
- **Fixed $\rho$**. Our current analysis considers a fixed $\rho$ across different iterations, but we find that a variable schedule for $\rho$ is more practical. Theoretically, our analysis could be extended to the setting with a variable $\rho$ by viewing the iterations with larger $\rho$ values as warm-up iterations that produce favorable initial conditions for the algorithm with smaller $\rho$ values.
- **Approximate score bound**. We can extend our convergence bound to the case where the learned score has error, which we can include in our revised manuscript. More precisely, assume we implement the EDM reverse diffusion step (as given by equation (10) in the manuscript) as: $$\mathrm{d} \boldsymbol{x}\_t=\left[u(t) \boldsymbol{x}\_t- v(t)^2 s\_t\left(\boldsymbol{x}\_t\right)\right] \mathrm{d} t + v(t)\mathrm{d}\bar{\boldsymbol{w}}\_t,$$
where $s\_t$ is the approximate score. Then, by differentiating the KL divergence along the two dynamics or using the Girsanov theorem (in the spirit of the proof in [4]), we can prove the following: for $\tau \in [k(1+t^\ast)+1, (k+1)(1+t^\ast)]$, which corresponds to the prior step in the Split Gibbs Sampler, we have:
$$\frac{1}{4}\int\_{k(1+t^\ast)+1}^{(k+1)(1+t^\ast)} \lambda(\tau)\text{FI}(\pi\_\tau\Vert \nu\_\tau)\mathrm{d}\tau \leq \epsilon\_{\text{score}}(k) + \text{KL}(\pi\_{(k+1)(1+t^\ast)}\Vert \nu\_{(k+1)(1+t^\ast)}) - \text{KL}(\pi\_{k(1+t^\ast)+1}\Vert \nu\_{k(1+t^\ast)+1})$$
where $\epsilon\_{\text{score}}(k) := \int\_{0}^{t^\ast} v(t)^2\mathbb{E}||s\_t(\boldsymbol{x}\_t^{(k)}) - \nabla \log p\_t(\boldsymbol{x}\_t^{(k)})||^2 \mathrm{d} t$ and $\boldsymbol{x}\_t^{(k)}, 0\leq t \leq t^\ast$ is the exact EDM reverse process, which satisfies equation (10) in the manuscript, starting at $\boldsymbol{x}\_0^{(k)} \sim \nu\_{k}^Z$ and ending up at $\boldsymbol{x}\_{t^\ast}^{(k)} \sim \nu\_{k+1}^X$. Therefore, up to score approximation errors, a convergence bound analogous to Theorem 3.1 will hold.
- **Convergence to the true posterior**. Indeed, the asymptotic convergence to the true posterior as $\rho\to 0$ is difficult to obtain (see Section 5 of [5]). We agree that this is out of the scope of the paper, so we leave it to future work.
> Question 1
We trained a denoiser for the Gaussian image prior. More details are presented in Appendix C.2 "Model checkpoint" paragraph.
> Question 2
Please refer to Appendix D for how we chose the hyperparameters for all the methods. For our method, we fine-tuned the parameters in the annealing schedule $\rho_0$, $\alpha$, and $\rho_\min$ (notations from Appendix C.3) by performing a grid search over 20 FFHQ images outside of the test set we used for final reporting (referred to as the validation set hereafter). For DDRM and DPS on linear inverse problems, we used the default parameters in their official repositories. For all other cases, we performed a grid search of the main parameter(s) of each method on the validation set. Specifically, for DPIR, we used the same annealing function as that in the official repository of [6] but fine-tuned the starting/ending noise level and number of iterations.
> Question 3
We did not employ any explicit regularization parameters. Within the Bayesian framework, the data-fidelity (likelihood) and regularization (prior) should be automatically balanced based on the noise distribution.
> Question 4
The discretization time steps are the same for all values $\rho_k$. The number of iterations that are actually run depends on the specific value of $\rho_k$. See Appendix C.2 “Pseudocode” paragraph and Algorithm 3 for more details.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their detailed answers. Here are additional remarks / questions.
**W1:**
1. Although not demonstrated in the experiments, I believe the approach in [1] could be equally applied to nonlinear inverse problems.
2. Could you clarify why you consider the denoiser design based on diffusion models in [1] to be heuristic?
3. The annealing strategy, despite not being adopted in [1], is commonly used in Plug-and-Play literature. However, incorporating this strategy introduces a discrepancy between your theoretical framework and experimental results, which is concerning.
4. I acknowledge that using EDM as the diffusion model offers a more flexible and optimized framework.
**W2:**
I do not understand why the clock time does not scale linearly with NFE. It appears that the operations outside the denoiser evaluation in both DPS and your method are relatively insignificant in time.
**W3:**
1. I strongly disagree with your statement, as you did not run the algorithms with fixed $\rho$ even for smaller values.
2. Can you explicitly state the final convergence bound analogous to Theorem 3.1, including the time integral from 0 to T, that would result from this additional theoretical consideration?
---
Rebuttal 2:
Comment: Thank you for your response and comments. Below we answer them point by point.
> **W1.1**
Indeed, we have tried to apply PnP-SGS [1] to nonlinear inverse problems by combining our likelihood step with its prior step design, which was how we obtained the PnP-SGS results for Table 2. While it could still handle the coded diffraction pattern problem, it failed on the more challenging Fourier phase retrieval problem. We also tried to include the annealing strategy for PnP-SGS but found that the method diverged with large $\rho$, probably due to its heuristic design of the prior step, and thus did not benefit from annealing. Overall, our method significantly outperforms PnP-SGS by at least 1dB in PSNR for all linear problems and coded diffraction patterns, and 15dB for Fourier phase retrieval. Our experimental results indicate that PnP-SGS struggles with challenging nonlinear inverse problems, such as Fourier phase retrieval.
> **W1.2**
There are mainly two aspects. Here we explain using the notations in the original DDPM paper. First, to rigorously implement the prior step of SGS, one needs to treat the input as an observation of the *unscaled* image $\boldsymbol{x}_0$ with additive white Gaussian noise. However, in DDPM, the mean of state $\boldsymbol{x}\_t$ is not $\boldsymbol{x}\_0$, but $\sqrt{\bar{\alpha}\_t}\boldsymbol{x}\_0$, which is a *down-scaled* version of $\boldsymbol{x}\_0$. So, it is inaccurate to directly use DDPM as a denoiser. This mismatch is particularly significant when starting from a large $t^\ast$ as $\sqrt{\bar{\alpha}\_{t^\ast}}$ is close to 0. Second, according to the SGS formulation, the noise estimation module is unnecessary, and one should always denoise $\boldsymbol{z}^{(k)}$ (the $\boldsymbol{z}$ iterate of SGS at iteration $k$) assuming a noise level $\rho$. However, PnP-SGS does not take into account the hyperparameter $\rho$ in the denoising problem. It is unclear how the generated $\boldsymbol{x}^{(k+1)}$ relates to the target conditional distribution $\pi^{X|Z=\boldsymbol{z}^{(k)}}(\boldsymbol{x})$ for each prior step of SGS. Therefore, the posterior distribution sampled from the denoiser in [1] is not the desired posterior distribution even with a perfect score function.
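The scaling mismatch above can be checked numerically. The sketch below is our illustration, not part of the paper; it assumes the standard linear beta schedule from the original DDPM paper and computes the factor $\sqrt{\bar{\alpha}_t}$, which decays toward 0 at large $t$, so treating $\boldsymbol{x}_t$ as an unscaled noisy observation of $\boldsymbol{x}_0$ becomes increasingly inaccurate:

```python
import numpy as np

# DDPM forward process: the mean of x_t is sqrt(alpha_bar_t) * x_0.
# Standard linear beta schedule (beta from 1e-4 to 0.02 over 1000 steps).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)
scale = np.sqrt(alpha_bar)  # the down-scaling factor sqrt(alpha_bar_t)

# The factor starts near 1 but decays monotonically toward 0, so for large t
# the denoiser input is far from "x_0 plus additive white Gaussian noise".
print(f"scale at t=1: {scale[0]:.4f}, scale at t=T: {scale[-1]:.4f}")
```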
> **W1.3**
We never claimed that the idea of annealing is our contribution. Nevertheless, we believe our contribution lies in proposing a general framework that accommodates annealing in an easy yet rigorous way for SGS and leads to significant performance improvement. Our focus is more on the algorithm design and experimental validation. To the best of our knowledge, no existing theory on SGS provides a non-asymptotic convergence bound with annealing, so this is a challenging open question. Although our current theory assumes fixed $\rho$, it still provides some theoretical insights into the algorithm behavior, such as the interpretation of $\rho$ as step size, and potentially opens up new theoretical directions for SGS based on the Fisher information. We hope to extend the analysis to the annealing case in the future.
> **W2**
We respectfully disagree with your last statement. DPS requires backpropagating through the entire denoiser network for each diffusion step after the forward pass of the denoiser, introducing a significant computational overhead. On the other hand, our method does not require doing so. This is the main reason why the clock time does not scale linearly with NFE for these two methods. Moreover, unlike DPS, which applies a likelihood update for every function evaluation, our method does so only once every several function evaluations.
> **W3.1**
As we showed in the "Annealing schedule for $\rho$" paragraph of Appendix C.3, our annealing schedule decreases to a fixed minimum level at $\rho_{\min}$, so we indeed fixed $\rho$ at small values after a certain number of iterations. Furthermore, we did run our algorithms for the synthetic prior experiments with a fixed schedule of $\rho$ throughout the process, as shown in Table 4. Experimental results show that our method can accurately sample the posterior, which corroborates our theory. For experiments on linear inverse problems and coded diffraction patterns with FFHQ images, the results with fixed $\rho$ are on par with those with annealing, probably because the problems are not highly non-convex. For more challenging nonlinear problems, such as the Fourier phase retrieval, we empirically find that an annealing schedule is essential to overcome the non-convexity of the problem and provide accurate reconstruction.
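For concreteness, a schedule with the decay-to-floor behavior described above can be sketched as follows. This is a hypothetical geometric schedule with placeholder values for $\rho_0$, $\alpha$, and $\rho_{\min}$; the paper's exact schedule is given in Appendix C.3:

```python
# Hypothetical geometric annealing schedule decaying to a floor rho_min.
# rho_0, alpha, and rho_min are placeholder values, not the paper's settings.
def rho_schedule(k, rho_0=1.0, alpha=0.9, rho_min=0.05):
    return max(rho_0 * alpha**k, rho_min)

# Monotonically non-increasing; fixed at rho_min once the decay crosses the floor.
rhos = [rho_schedule(k) for k in range(50)]
```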
> **W3.2**
By combining the above argument with those in the manuscript, we will get the following bound with score function error: $$\frac{1}{T} \int_0^{T} \mathsf{FI}\left(\pi_\tau || \nu_\tau\right) \mathrm{d} \tau \leq \frac{4\mathsf{KL}(\pi^X||\nu_0^X)}{K(1+t^\ast) \min(\rho, \delta)^2} + \frac{1}{K(1+t^\ast)\delta^2}\sum_{k=1}^K\epsilon_{\text{score}}(k).$$
---
Rebuttal Comment 2.1:
Title: Response to authors
Comment: I thank the authors for their careful answer.
I think that the manuscript would benefit from a more detailed comparison with Coeurdoux et al. and from an extended theoretical analysis with approximate score. Also, it should be made clear that the theory holds only for a fixed $\rho$ parameter. With these clarifications I am ready to recommend acceptance.
---
Reply to Comment 2.1.1:
Comment: We appreciate your constructive feedback on our work. We will include a more detailed comparison with Coeurdoux et al. and the theoretical analysis with an approximate score function in the final version. We will also clarify that the current theory is only with fixed $\rho$. | null | null | null | null | null | null |
A Combinatorial Algorithm for the Semi-Discrete Optimal Transport Problem | Accept (poster) | Summary: The paper presents a primal-dual algorithm for approximately solving the semi-discrete optimal transport problem. The algorithm runs over $O(\log(\Delta/\epsilon))$ scales, starting with the scale $\delta = \Delta^2$ and halving it in each round. The idea is to maintain a $\delta$-feasible weight function during the course of the algorithm, which in each round constructs a residual graph from the three levels of expansion of the Voronoi diagram, and augments the transport plan along augmenting paths. To ensure the invariants and that the algorithm terminates, the algorithm considers only admissible paths and performs a procedure to eliminate cycles after each round. The proposed algorithm improves the running time from $n^9$ in Agarwal et al. to $n^4$ and the size of the graph from $n^5$ to $n^3$. The algorithm also extends to any dimension $d > 2$ and any $p$-Wasserstein distance with $p \ge 1$.
Strengths: - The paper presents a set of strong results that significantly improve known results on semi-discrete optimal transport problem.
- Although I didn't verify all the proofs, I think the algorithm and procedures and the idea of using admissibility for DFS to avoid and remove cycles are all reasonable.
Weaknesses: - There isn't any big weakness I can see.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors provide some more background on the construction of Voronoi diagram, ie, the complexity of the construction?
- Is there any lower bound on the runtime of algorithms for the semi-discrete OT problem?
- How is Theorem 1.2 compared with other algorithms for the discrete OT problem?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough review. We answer your questions below.
----
>Could the authors provide some more background on the construction of Voronoi diagram, ie, the complexity of the construction?
**Response:**
For 2 dimensions, the weighted Voronoi diagram under the squared Euclidean distance (also known as the Laguerre diagram or the power diagram) can be constructed in $O(n\log n)$ time [1, 2]. For higher dimensions $d>2$, the construction time would be $O(n^{\lceil (d+1)/2\rceil})$ [3].
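As an illustration of the object being constructed (not of the efficient construction algorithms cited above), the sketch below assigns query points to cells of the power (Laguerre) diagram by brute force, using the power distance $\|x - s_i\|^2 - w_i$; `power_cell_assignment` is our hypothetical helper:

```python
import numpy as np

def power_cell_assignment(points, sites, weights):
    """Assign each point to the site minimizing the power distance
    ||x - s_i||^2 - w_i; the resulting cells form the power (Laguerre)
    diagram. Brute force O(n_points * n_sites), for illustration only."""
    d2 = ((points[:, None, :] - sites[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d2 - weights[None, :], axis=1)

sites = np.array([[0.0, 0.0], [1.0, 0.0]])
pts = np.array([[0.1, 0.0], [0.9, 0.0]])

# Equal weights reduce to the ordinary Voronoi diagram.
labels = power_cell_assignment(pts, sites, np.array([0.0, 0.0]))
# Increasing a site's weight grows its cell.
labels_grown = power_cell_assignment(pts, sites, np.array([1.0, 0.0]))
```

In semi-discrete OT under the squared Euclidean distance, adjusting the weights $w_i$ (the dual variables) changes how much mass each cell captures, which is why maintaining such diagrams is central to the algorithm.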
----
[1] Fortune, Steven. "Voronoi diagrams and Delaunay triangulations." In Handbook of Discrete and Computational Geometry, pp. 705-721, 2017.
[2] Sugihara, Kokichi. "Laguerre Voronoi diagram on the sphere." Journal for Geometry and Graphics 6, no. 1 (2002): 69-81.
[3] Aurenhammer, Franz. "Power diagrams: properties, algorithms and applications." SIAM Journal on Computing 16, no. 1 (1987): 78-96.
----
----
>Is there any lower bound on the runtime of algorithms for the semi-discrete OT problem?
**Response:** In 2 dimensions, there are no known sub-quadratic algorithms even for the discrete version of the OT problem. The semi-discrete OT problem is a generalization of the discrete OT problem. So, to obtain a runtime better than $O(n^2)$, one would expect that a subquadratic-time algorithm for the discrete OT would come first.
For higher dimensions, as stated in our paper, computing an $\varepsilon$-close semi-discrete transport plan in a time that is polynomial in $d$ and $\log 1/\varepsilon$ is known to be #P-hard [4].
----
[4] Taşkesen, Bahar, Soroosh Shafieezadeh-Abadeh, and Daniel Kuhn. "Semi-discrete optimal transport: Hardness, regularization and numerical solution." Mathematical Programming 199, no. 1 (2023): 1033-1106.
----
----
> How is Theorem 1.2 compared with other algorithms for the discrete OT problem?
**Response:**
Thank you for this question. Given $\mu$ (with support size $N$) and $\nu$ (with support size $k$), the execution time of the best-known discrete OT algorithms is almost linear in the number of edges of the bipartite graph, i.e., $(Nk)^{1+o(1)}$ [5, 6]. In contrast, by preprocessing $\mu$ in near-linear time, we can compute a discrete OT plan for any query $\nu$ in $O(\sqrt{N}k^{4}\log 1/\delta)$ time. Thus, when $k$ is sufficiently small (say $k < N^{1/6}$), our algorithm is faster than all existing discrete OT algorithms. We will highlight this improvement to discrete OT algorithms in the next version of our paper.
----
[5] Chen, Li, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva. "Maximum flow and minimum-cost flow in almost-linear time." FOCS 2022.
[6] Agarwal, Pankaj K., Kyle Fox, Debmalya Panigrahi, Kasturi R. Varadarajan, and Allen Xiao. "Faster Algorithms for the Geometric Transportation Problem." SoCG 2017.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. I maintain my current score. | Summary: The paper studies the semi-discrete optimal transport problem, i.e. a generalization of bipartite matching where one side of the graph is not finite, but instead a probability distribution on some subset of $\mathbb{R}^d$.
It gives an algorithm that computes an optimal transport plan (up to some additive $\varepsilon$) for such instances, subject to an oracle that can integrate the density function of the continuous distribution in constant time over some given triangle.
The proposed algorithm is an adaptation of the solution of Agarwal et al. [SODA 24], with improvements that reduce the degree of the polynomial dependence of the runtime on the size of the discrete distribution in the input.
The authors further note that their methods yield a kind of sub-linear time online algorithm for discrete optimal transport where one (large) side of the bipartite graph is fixed and the smaller side is subject to updates. This is achieved by representing the larger side as a continuous distribution which can be efficiently sampled.
Note that I was not able to check the appendix where essentially all proofs live.
Strengths: The paper is clear and easy to follow, and improves the state of the art on a very relevant topic (optimum transport).
Weaknesses: - The work is largely a reworking of other recent results in the field, in particular the paper of Agarwal et al. at SODA24. It's not clear that improving the polynomial dependence here is highly relevant.
Technical Quality: 3
Clarity: 3
Questions for Authors: No questions
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations and impact have been addressed fully; this is a theoretical result with no negative consequences to be expected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful review. We address your concern below.
> The work is largely a reworking of other recent results in the field, in particular the paper of Agarwal et al. at SODA24. It's not clear that improving the polynomial dependence here is highly relevant.
**Response:** Improvement in the run time is only one part of our contribution. What we consider the most important contribution is the new algorithmic framework we propose for the semi-discrete OT problem.
The algorithm from SODA 2024 discretizes a continuous distribution and then solves an instance of the discrete OT problem defined on these points in $\tilde{O}(n^9)$ time.
In contrast, our algorithm extends the classical combinatorial primal-dual approach for discrete OT to the semi-discrete setting. It applies ideas such as augmenting paths on residual graphs directly to continuous regions (rather than samples from the regions). To our knowledge, our paper is the first to introduce this framework. As a result of the new combinatorial framework, we obtain a significant reduction in execution time to $\tilde{O}(n^4)$.
We remark that polynomial improvements to discrete OT have had a significant impact on ML applications [1, 2, 3, 4].
Similar improvements for the semi-discrete OT problem are an important challenge, and we make significant progress in addressing it.
----
[1] Cuturi, Marco. "Sinkhorn distances: Lightspeed computation of optimal transport." NeurIPS 2013.
[2] Altschuler, Jason, Jonathan Niles-Weed, and Philippe Rigollet. "Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration." NeurIPS 2017.
[3] Lahn, Nathaniel, Deepika Mulchandani, and Sharath Raghvendra. "A graph theoretic additive approximation of optimal transport." NeurIPS 2019.
[4] Jambulapati, Arun, Aaron Sidford, and Kevin Tian. "A direct $\tilde{O}(1/\varepsilon)$ iteration parallel algorithm for optimal transport." NeurIPS 2019. | Summary: This paper proposes a novel combinatorial algorithm for the problem known as semi-discrete optimal transport. The proposed method constructs a residual graph by considering the cells of a $\delta$-expanded Voronoi diagram, a relaxed concept of a weighted Voronoi diagram, as vertices. The algorithm performs augmentation on the residual graph while scaling $\delta$. This approach significantly reduces the theoretical computational complexity compared to existing methods. Furthermore, the proposed method can be applied to discrete optimal transport with large supports, enabling sublinear time responses to OT queries with respect to support size through preprocessing.
Strengths: * The paper makes a significant contribution to the important problem of semi-discrete optimal transport in the field of machine learning.
* The proposed method drastically reduces computational complexity compared to existing methods. Additionally, its application to query responses through preprocessing also demonstrates excellent computational efficiency.
* The paper is exceptionally well-written, making it easy to follow the ideas despite the challenging content.
Weaknesses: * Understanding that this is a theoretical paper, it is important to note that the lack of numerical experiments makes it difficult to assess the practical applicability of the proposed method.
* While this is not a weakness of the paper, I am not able to fully verify the validity or the details of the proofs and methodology, and therefore cannot guarantee their correctness.
Technical Quality: 3
Clarity: 4
Questions for Authors: * Is this method an implementable and practically applicable algorithm, or is it challenging to apply in real-world scenarios due to large constant factors, thus possessing only theoretical value?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors properly address the limitations of the method within the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful review. We answer your question below.
>Is this method an implementable and practically applicable algorithm, or is it challenging to apply in real-world scenarios due to large constant factors, thus possessing only theoretical value?
**Response:**
The main contribution of the paper is a new algorithmic framework for the semi-discrete OT problem, which we believe has much potential. Overall, our algorithm is practical and the constants hiding in the big-O notation are small. However, faithful implementation of our algorithm requires (a) constructing and dynamically maintaining arrangements of Voronoi diagrams, and (b) computing the exact mass inside any given triangle. Developing efficient and robust software that combines our algorithm with black-boxes (a) and (b) is a significant task. An interesting direction of future research is to explore which existing geometric software libraries, including the ones based on GPUs, can be used for our setting.
Recall that our algorithm does not make any assumptions on the smoothness of the continuous distribution.
In future work, we will investigate whether we can significantly simplify our algorithm if we are willing to make certain smoothness assumptions (similar to the ones made in existing work [1, 2]) about the continuous distribution.
----
[1] Oliker, Vladimir I., and Laird D. Prussner. "On the numerical solution of the equation $\frac{\partial^{2}z}{\partial x^2} \frac{\partial^2 z}{\partial y^2} - \left(\frac{\partial^2 z}{\partial x \partial y}\right)^2 = f$ and its discretizations, I." Numerische Mathematik 54, no. 3 (1989): 271–293.
[2] Merigot, Quentin, and Boris Thibert. "Optimal transport: discretization and algorithms." In Handbook of numerical analysis, vol. 22, pp. 133-212. Elsevier, 2021.
---
Rebuttal Comment 1.1:
Title: Comments on the rebuttal
Comment: I have reviewed the authors' response. My concerns have been resolved. I will maintain my current score. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation | Accept (poster) | Summary: This paper introduces E2ENet, a 3D medical image segmentation model designed for efficiency and performance. E2ENet incorporates Dynamic Sparse Feature Fusion (DSFF) to adaptively fuse informative multi-scale features and a Restricted Depth-Shift mechanism in 3D convolution to maintain low model complexity. Extensive experiments demonstrate that E2ENet achieves a superior trade-off between accuracy and efficiency, significantly reducing parameter count and FLOPs compared to previous methods, particularly in large-scale datasets like AMOS-CT.
Strengths: Improved Efficiency: E2ENet significantly reduces the parameter count and FLOPs, making it more computationally efficient and suitable for deployment on resource-limited hardware without compromising on performance.
Innovative Mechanisms: The introduction of Dynamic Sparse Feature Fusion (DSFF) and Restricted Depth-Shift in 3D convolution effectively balances the need for high accuracy with lower computational complexity, offering a novel approach to 3D medical image segmentation.
Robust Validation: Extensive experiments on multiple challenging datasets demonstrate E2ENet's consistent performance, ensuring its reliability and applicability in various medical imaging scenarios, especially for exceeding nnUNet.
Weaknesses: 1. Unclear Backbone Network: The backbone network of E2ENet is not clearly defined. Figure 2 labels the left part as an "efficient backbone," but it is ambiguous whether this refers to an EfficientNet-based backbone or simply a group of CNN layers. This lack of clarity can confuse readers and detract from the paper's overall comprehensibility.
2. Segmentation Performance and Kernel Size: The segmentation performance is primarily evaluated on the BraTS and AMOS datasets. However, the ablation study on AMOS does not adequately address the suitability of the DSFF kernel size for BraTS. Table 4 lacks results for a kernel size of 3x3x3, which raises questions about the generalizability of the chosen kernel sizes across different datasets.
3. Lack of Comparison with Recent Lightweight Networks: The paper does not include comparisons with recent lightweight network structures specifically designed for medical image analysis. This omission limits the assessment of how E2ENet stands relative to other contemporary, efficient models and reduces the comprehensiveness of the evaluation.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Clarification on the Backbone Network:
Question: Can you provide a more detailed description of the backbone network used in E2ENet? Specifically, is it an EfficientNet-based backbone, or is it composed of a different set of CNN layers?
Suggestion: Consider revising Figure 2 and the corresponding text to clearly define the architecture of the backbone network. Providing explicit details will enhance the readers' understanding and reduce ambiguity.
2. Kernel Size Generalization:
Question: Why was the 3x3x3 kernel size not included in the ablation study results for the BraTS dataset in Table 4?
Suggestion: It would be beneficial to include the results for the 3x3x3 kernel size in the ablation study for the BraTS dataset. This would help in understanding the generalizability of the DSFF mechanism across different datasets and ensure that the chosen kernel sizes are suitable for various segmentation tasks.
3. Comparison with Recent Lightweight Networks:
Question: Have you considered comparing E2ENet with other recent lightweight network structures specifically designed for medical image analysis?
Suggestion: Including comparisons with recent lightweight networks would strengthen the paper by providing a more comprehensive evaluation of E2ENet's performance. This could involve benchmarking against models that are known for their efficiency and effectiveness in medical image segmentation.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer SYnH
We would like to thank the reviewer for your thoughtful and detailed comments. We are glad that you appreciate the efficiency improvements and robust validation presented in our paper. We address your comments below.
> Unclear Backbone Network: The backbone network of E2ENet is not clearly defined. Figure 2 labels the left part as an "efficient backbone," but it is ambiguous whether this refers to an EfficientNet-based backbone or simply a group of CNN layers. This lack of clarity can confuse readers and detract from the paper's overall comprehensibility.
Thank you for pointing out the description of the backbone network, which indeed helps to enhance the readers' understanding and reduce ambiguity. We will provide this information below and add it to our paper.
The efficient backbone network consists of several levels as in Figure 2, each comprising two consecutive blocks. Each block includes our proposed efficient Restricted Depth-Shift 3D Convolutional layer, followed by instance normalization and ReLU activation (termed **conv–norm–relu**). After each level, the downsampling is performed using a strided convolution operation in the second block of that level (the convolution in the second block of the new resolution has a stride >1).
Table: Backbone Network – Each row describes level $i$, with input resolution $H^i$, $W^i$, $D^i$, and output channels $C^i$, given an input size of 128×128×128.
| level $i$ | Operator | Resolution $H^i$, $W^i$, $D^i$ | Channels $C^i$ |
| -------- | -------- | -------- | --- |
| 1 | conv–norm–relu + conv–norm–relu | 128x128x128 | 48 |
| 2 | conv–norm–relu + conv–norm–relu | 64x64x64 | 96 |
| 3 | conv–norm–relu + conv–norm–relu | 32x32x32 | 192 |
| 4 | conv–norm–relu + conv–norm–relu | 16x16x16 | 320 |
| 5 | conv–norm–relu + conv–norm–relu | 8x8x8 | 320 |
> Kernel Size Generalization: Question: Why was the 3x3x3 kernel size not included in the ablation study results for the BraTS dataset in Table 4? Suggestion: It would be beneficial to include the results for the 3x3x3 kernel size in the ablation study for the BraTS dataset.
Thank you for your valuable suggestion. Indeed, a 3x3x3 kernel size is helpful in the ablation study for the BraTS dataset as well. This will aid in understanding the generalizability of the DSFF mechanism across different datasets. Here, we update these results as follows and include them in Table 4 of the paper:
| w/ DSFF | shift | kernel size | ED | ET | NET | mDice | Params | FLOPs |
| ------- | ----- | ----------- | --- | --- | --- | ----- | ------ | ----- |
| No | No | 3x3x3 | 80.9 | 61.9 | 79.1 | 74.0 | 52.55 | 4519.26|
| Yes | No | 3x3x3 | 81.0 | 62.2 | 79.1 | 74.1 | 28.02 | 2023.52 |
| No | Yes | 1x3x3 | 81.0 | 62.3 | 79.0 | 74.1 | 23.89 | 3071.78|
| Yes | Yes | 1x3x3 | **81.2** | **62.7** | **79.5** | **74.5** | 11.24| 1067.06|
We find that, compared to kernel sizes of 3x3x3, E2ENet with kernel sizes of 1x3x3 combined with a depth shift (3rd and 4th row) maintains or even improves segmentation accuracy, whether with DSFF or without DSFF. This further demonstrates that our **proposed efficient Restricted Depth-Shift 3D Convolutional layer, which utilizes a 1x3x3 kernel with restricted depth shift, is equivalent to a 3x3x3 kernel in terms of segmentation accuracy.** Moreover, it offers significant **savings in computational and memory resources**, as observed in the AMOS-CT dataset in Table 3.
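To make the restricted depth-shift idea concrete, here is a minimal NumPy sketch. It is our illustration, not the paper's implementation; it assumes zero-filling of vacated slices and an even channel split. Channel groups are shifted by −1, 0, or +1 along the depth axis, so that a subsequent 1x3x3 convolution mixes information across neighboring depth slices, approximating a 3x3x3 kernel's receptive field:

```python
import numpy as np

def restricted_depth_shift(x, shifts=(-1, 0, 1)):
    """Shift channel groups of a (C, D, H, W) array along the depth axis,
    zero-filling vacated slices. After this shift, a 1x3x3 convolution sees
    features from neighboring depth slices, approximating a 3x3x3 kernel's
    receptive field at a fraction of the parameter and FLOP cost."""
    groups = np.array_split(np.arange(x.shape[0]), len(shifts))
    out = np.zeros_like(x)
    for idx, s in zip(groups, shifts):
        if s == 0:
            out[idx] = x[idx]
        elif s > 0:
            out[idx, s:] = x[idx, :-s]   # shift toward larger depth index
        else:
            out[idx, :s] = x[idx, -s:]   # shift toward smaller depth index
    return out

# Toy example: 2 channels, 4 depth slices, 1x1 spatial size.
x = np.arange(2 * 4, dtype=float).reshape(2, 4, 1, 1)
# With shifts (-1, 0): channel 0 moves one slice up, channel 1 is unchanged.
y = restricted_depth_shift(x, shifts=(-1, 0))
```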
> Comparison with Recent Lightweight Networks: Question: Have you considered comparing E2ENet with other recent lightweight network structures specifically designed for medical image analysis? Suggestion: Including comparisons with recent lightweight networks would strengthen the paper by providing a more comprehensive evaluation of E2ENet's performance.
Thank you for your suggestion. We agree that comparisons with recent lightweight networks would strengthen the paper. We have included another recent efficient lightweight network, UNETR++ [1], as our baseline. UNETR++ offers both high-quality segmentation accuracy and efficiency. From the results below, we can see that E2ENet achieves better mDice scores. This further verifies the efficiency and accuracy of our proposed model for 3D medical segmentation.
Moreover, we conducted an extended ablation study by integrating our proposed DSFF into UNETR++[1], referred to as **UNETR++ w/ DSFF** in the table below. Specifically, we introduced dynamic sparsity into UNETR++ by sparsifying its connections: **less important activated connections are removed, while the same number of deactivated connections are randomly reactivated during training**. The results show that parameters can be further reduced while maintaining stable mDice performance. This further demonstrates the effectiveness of our proposed DSFF, showing that it can be **easily integrated into recent lightweight networks to potentially enhance efficiency while maintaining mDice performance.**
| model | ED | ET | NET | mDice | Params |
| ----- | --- | --- | --- | ----- | ------ |
| UNETR++ | 80.2 | 61.0 | 78.7 | 73.3 | 33.8 |
| UNETR++ w/ DSFF | 80.4 | 61.4 | 78.4 | 73.4 | 5.11 |
| E2ENet | 81.2 | 62.7 | 79.5 | 74.5 | 11.24|
[1] Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang and Fahad Shahbaz Khan. UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation. IEEE Transactions on Medical Imaging, 2024.
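The remove-and-reactivate step described above can be sketched in the spirit of dynamic sparse training. This is a generic illustration with hypothetical names and a magnitude-based importance score; DSFF's actual scoring of fusion connections during training may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_and_grow(weights, mask, drop_frac=0.3):
    """One drop-and-grow update: deactivate the smallest-magnitude active
    connections and reactivate an equal number of random inactive ones,
    so the total number of active connections is preserved."""
    mask = mask.copy()
    flat_m, flat_w = mask.ravel(), weights.ravel()
    active = np.flatnonzero(flat_m)
    n_drop = int(drop_frac * active.size)
    if n_drop == 0:
        return mask
    order = np.argsort(np.abs(flat_w[active]))
    flat_m[active[order[:n_drop]]] = 0                           # drop least important
    inactive = np.flatnonzero(flat_m == 0)
    flat_m[rng.choice(inactive, size=n_drop, replace=False)] = 1  # regrow at random
    return mask

w = np.array([[0.9, -0.05], [0.4, 0.0]])
m = np.array([[1, 1], [1, 0]])
m2 = drop_and_grow(w, m, drop_frac=0.34)
```

Keeping the number of active connections fixed is what lets such a scheme explore which fusion connections matter while holding the parameter budget constant.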
We thank you again for the time and effort you've taken to participate in the review of our paper. If you have further questions and concerns, we are more than happy to discuss with you.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for addressing my concerns. I will raise my score to weak accept.
---
Rebuttal 2:
Title: Thank you for your support!
Comment: Dear Reviewer SYnH,
We sincerely thank you for your constructive comments and support!
Best,
Authors | Summary: The paper introduces E2ENet, a novel neural network designed for 3D medical image segmentation, which emphasizes efficiency in computational resource usage without compromising accuracy. This paper introduces a Dynamic Sparse Feature Fusion (DSFF) mechanism that adaptively learns to integrate multi-scale features effectively and a novel application of restricted depth-shift in 3D convolution that aligns with the computational simplicity of 2D methods. The model demonstrates superior performance on various benchmarks like AMOS-CT challenge and BraTS Challenge in MSD, showcasing significant reductions in parameters and computational costs while maintaining competitive accuracy.
Strengths: (1) The proposed DSFF mechanism provides a more efficient feature fusion process while reducing computational and memory overhead. (2) E2ENet integrates a depth-shift strategy into 3D convolutional networks, enabling the network to capture 3D spatial relationships. (3) E2ENet significantly reduces the parameter count, to as little as 7.63 M.
Weaknesses: Section 3.2 (Ablation Studies) lacks insight into Table 3; a detailed question is given in the Questions section below.
Technical Quality: 4
Clarity: 3
Questions for Authors: For Section 3.2 Table 3, it seems that for the different shift size, the mDice score does not vary much from each other. What are authors’ insights about this? Based on this, how do authors justify in Section 2.3 that the shift magnitude would have a negative impact on the effectiveness of the shift operation?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer JUDh
Thank you very much for taking the time to review our paper and for your helpful comments. We provide detailed responses to your constructive feedback below.
> **For Section 3.2 Table 3, it seems that for the different shift size, the mDice score does not vary much from each other. What are authors’ insights about this?**
Thank you for your enthusiasm for our paper and for sharing your concerns with us. Indeed, we observe a marginal difference in mDice, with a decrease of 0.4 when the shift size magnitude is increased from 1 to 3. However, for mNSD, there is a significant performance drop from 82.3 to 81.6. This is because the mDice score primarily measures the **overlap between the predicted segmentation and the ground truth**, whereas mNSD focuses on the **distance between the surfaces (boundaries)**. This demonstrates that shift size is more impactful on boundary alignment.
Furthermore, in additional experiments where we extended the shift size magnitude to 7, we observed a further decrease in performance of 2.5 for mDice and 4.8 for mNSD, compared to a shift size magnitude of 1. The detailed results are shown below. In summary, the shift operation is beneficial, resulting in a 1.5 improvement in mDice compared to not using the shift operation. However, our experimental results indicate that the magnitude should not be set too large. We explain more below.
| w/ shift | shift size | kernel size | mDice |mNSD |
| ----- | ----------- | --- | --- | --- |
| No | -- | 1x3x3 | 88.6 | 78.6 |
| Yes | (−1, 0, 1) | 1x3x3 | **90.1** | **82.3** |
| Yes | (−2, 0, 2) | 1x3x3 | 89.8 | 82.0 |
| Yes | (−3, 0, 3) | 1x3x3 | 89.7 | 81.6 |
| Yes | (−7, 0, 7) | 1x3x3 | 87.6 | 77.5 |
> **How do authors justify in Section 2.3 that the shift magnitude would have a negative impact on the effectiveness of the shift operation?**
In Section 2.3, we propose the Restricted Depth-Shift operation, which aims to capture depth-wise information while maintaining a 2D computation cost. However, increasing the shift size means considering more depth-wise information **at the expense of channel-wise information**, leading to an insufficient representation of channels, as discussed in Section 3. Additionally, a large shift size causes a **loss of local spatial relationships**, which are crucial for segmentation. This results in a blurring effect that reduces the precision of boundary alignment, particularly affecting metrics like mNSD, which rely heavily on accurate boundary information.
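For intuition, the restricted depth-shift idea described above can be sketched as follows. This is an illustrative reconstruction only (the function name, channel grouping, and zero-filling of vacated slices are our assumptions, not the authors' implementation):

```python
import numpy as np

def restricted_depth_shift(x, shifts=(-1, 0, 1)):
    """Illustrative sketch of restricted depth-shift: split the channels into
    groups and shift each group along the depth axis by a small offset, so a
    subsequent 1x3x3 (2D) convolution can mix depth-wise information at
    roughly 2D cost. x has shape (C, D, H, W); vacated slices are zero-filled.
    """
    groups = np.array_split(np.arange(x.shape[0]), len(shifts))
    out = np.zeros_like(x)
    for idx, s in zip(groups, shifts):
        if s == 0:
            out[idx] = x[idx]
        elif s > 0:
            out[idx, s:] = x[idx, :-s]   # shift forward along depth
        else:
            out[idx, :s] = x[idx, -s:]   # shift backward along depth
    return out
```

Because the shift is a pure memory operation, a following 1x3x3 convolution mixes information across neighbouring depth slices without 3D-convolution cost; larger offsets in `shifts` displace slices further apart, which is consistent with the loss of local spatial relationships discussed above.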
We would like to thank you again for your time and effort. If you have further questions, we are more than happy to discuss them with you.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer JUDh, we appreciate your valuable feedback and constructive comments. Since there is only one day left in the rebuttal process, we want to know if our response addresses your concerns. | Summary: In this paper, the authors propose a novel architecture that addresses the challenges observed in increasing the model size and computational complexity of neural network architectures. This leads to concerns in the deployment stage, mainly because of resource-limited hardware. The authors propose a 3D medical image segmentation model named Efficient to Efficient Network (E2ENet). They incorporated two designs to make the model efficient while preserving accuracy: Dynamic sparse feature fusion (DSFF) mechanism and Restricted depth-shift in 3D convolution. Extensive experiments on three benchmarks show that E2ENet consistently achieves a superior trade-off between accuracy and efficiency compared to prior state-of-the-art baselines.
Strengths: The paper is well-organized and well-written.
The paper reads well.
The motivation behind the study is clear.
The proposed restricted depth shift method is interesting and somewhat novel.
Weaknesses: Tables 3 and 4 don’t provide consistent results. It seems that the combination that works for CT does not optimally work for MRIs.
Missing discussion on limitations.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the proposed method accelerate training speed in terms of time to achieve SOTA accuracies?
Could the authors provide information on the inference times?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors didn't discuss the limitations of the proposed methodology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer xwUb
We sincerely thank you for your time and effort in reviewing our paper. We are glad that you find our work interesting and novel. We address your comments below.
> Tables 3 and 4 don’t provide consistent results. It seems that the combination that works for CT does not optimally work for MRIs.
For the mDice score, in Table 3, the shift operation in E2ENet with and without DSFF combination scores 90.1 and 90.2, respectively. In Table 4, with and without DSFF scores 81.2 and 81.0, respectively. These results rank among the top in these tables and show marginal differences in performance.
However, with the DSFF combination, while it does not significantly improve the mDice scores, the computational cost in terms of parameters is notably reduced from 23.9M to 11.2M, a **2-fold reduction** for both Table 3 and Table 4.
- Performance: The mDice scores show marginal differences, indicating that the shift operation combined with DSFF **maintains comparable performance** to the shift operation without DSFF.
- Efficiency: The incorporation of DSFF into E2ENet results in a substantial **reduction in the number of parameters and FLOPs**, achieving a significant computational cost reduction.
- Conclusion: As discussed in Section 3.2, our main conclusion is that dynamic sparse feature fusion significantly reduces computational resources while maintaining high performance without any significant degradation.
Furthermore, in Appendix B.7.3, we provide additional verification on the **statistical significance of the designed modules: DSFF and depth-shift operation**.
> Can the proposed method accelerate training speed in terms of time to achieve SOTA accuracies? Could the authors provide information on the inference times?
Thank you for your enthusiasm for our paper and for sharing your concerns with us.
Given the relatively restricted support for sparse operations in current off-the-shelf commodity GPUs and TPUs without sparsity-aware accelerators, we did not attempt to achieve practical speedup during training. Instead, we chose to implement our models with binary masks in our work. As mentioned extensively in the conclusion and appendix, the promising benefits of dynamic sparsity presented in this study have not yet translated into actual speedup. Accelerating training time will be a focus for our next work.
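As a rough illustration of the binary-mask simulation of sparsity mentioned above (a magnitude-based sketch under our own assumptions, not the authors' training code):

```python
import numpy as np

def apply_dynamic_sparsity(weight, sparsity=0.9):
    """Illustrative sketch of simulating unstructured sparsity with a binary
    mask: keep the largest-magnitude fraction of weights and zero the rest,
    as in magnitude-based dynamic sparse training. Returns the masked weights
    and the boolean mask."""
    k = int(weight.size * (1.0 - sparsity))  # number of weights to keep
    if k == 0:
        return np.zeros_like(weight), np.zeros_like(weight, dtype=bool)
    threshold = np.sort(np.abs(weight), axis=None)[-k]
    mask = np.abs(weight) >= threshold
    return weight * mask, mask
```

On commodity GPUs such a mask yields no wall-clock savings (the dense multiply still runs), which is exactly the gap between theoretical FLOP reduction and practical speedup noted above.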
Although not the focus of our current work, it would be interesting for future work to examine the speedup of sparse operations during training using such specialized hardware accelerators, as we see much room for improvement here. For example, at high unstructured sparsity levels, XNNPACK (https://github.com/google/XNNPACK) has already shown significant speedups over dense baselines on smartphone processors.
**For the inference time:**
Although the support for unstructured sparsity on GPUs remains relatively limited, its practical relevance has been widely demonstrated on non-GPU hardware, such as CPUs and customized accelerators. For instance, FPGA accelerators for an unstructured sparse RNN achieved high acceleration and energy efficiency compared to commercial CPUs and GPUs by maximizing the use of embedded multiply resources on the FPGA. Another notable success is DeepSparse (https://github.com/neuralmagic/deepsparse), which successfully deploys large-scale BERT-level sparse models on modern Intel CPUs, achieving a 10× model size compression with less than 1% accuracy drop, a 10× CPU-inference speedup with less than 2% accuracy drop, and a 29× CPU-inference speedup with less than 7.5% accuracy drop.
Inspired by these advancements, we adopted an approach based on DeepSparse. We conducted experiments with patches of images sized 32×32×32 as input, comparing the CPU wall-clock timings for online inference between our proposed E2ENet and nnUNet on an Intel Xeon Platinum 8360Y CPU with 18 cores. We acknowledge that, while **our proposed models with sparsity do achieve speedups in practical inference**, they are not as pronounced as those observed with BERT-level sparse models. This is primarily due to the nature of segmentation and 3D convolution operations. However, this presents a promising avenue for our future work.
| Methods | nnUNet | E2ENet(s=0.8) | E2ENet(s=0.9)|
| -------- | -------- | -------- | --- |
| Latency(ms) | 10.07 | 8.02 | **7.28** |
| Throughput(items/sec) | 99.19 | 124.60 | **137.13** |
| speedup | 1.0x | 1.26x | **1.38x** |
> Missing discussion on limitations.
We appreciate the reviewer pointing this out. We will make this clearer in the final version. Our proposed model leverages unstructured dynamic sparsity; however, due to the relatively restricted support for unstructured sparsity in current off-the-shelf commodity GPUs, we did not attempt to achieve practical speedup during training on GPU. This limitation presents a potential direction for our future work.
We thank you again for the time and effort you've taken to participate in the review of our paper. If you have further questions, we are more than happy to discuss with you.
---
Rebuttal Comment 1.1:
Comment: Thank you, authors, for the detailed response. After carefully considering your comments, I believe that the initial score accurately reflects the strengths of this work.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer xwUb, we appreciate your constructive comments and feedback! Thank you for your positive support! | null | null | Rebuttal 1:
Rebuttal: ## To All Reviewers:
We thank the reviewers for their constructive suggestions and in-depth analysis, which is helpful for our work. We are humbled by such a positive response, and we truly appreciate it.
We are delighted to note that all reviewers recognized the novelty of our research, found the idea **well-motivated**, **interesting**, and **novel** (Reviewers xwUb & SYnH), appreciated our contributions towards **improving efficiency** (Reviewers JUDh & SYnH), and acknowledged the **robust validation** (Reviewer SYnH). Additionally, Reviewer xwUb found our work **well-organized** and **well-written**.
In response to the reviewers' valuable feedback, we have made earnest efforts to address all raised concerns. A summary of our responses is as follows:
- Explained the results in Tables 3 and 4. (Reviewer xwUb)
- Reported the inference time. (Reviewer xwUb)
- Provided more insight into Table 3. (Reviewer JUDh)
- Explained our backbone. (Reviewer SYnH)
- Added results for a kernel size of 3x3x3 on the BraTS dataset. (Reviewer SYnH)
- Compared with recent lightweight networks. (Reviewer SYnH)
Should there be any points that remain unclear or require further clarification, please do not hesitate to bring them to our attention. We are open to any additional feedback, comments, or suggestions, and we sincerely appreciate your continued engagement in enhancing the quality of our work. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Enabling Adaptive Agent Training in Open-Ended Simulators by Targeting Diversity | Accept (poster) | Summary: This paper applies quality diversity (QD) optimization (evolutionary method) to the problem of diverse task generation for (meta) reinforcement learning (RL). It argues that QD could be used in settings where an open-ended simulator’s parameterization is unlikely to produce tasks that are diverse in high-level features. The method works by handcrafting a set of high-level task features that are relevant to the learning process of the RL agent, then running QD to collect a set of diverse parameterizations that cover the feature space distribution well. A (meta) RL agent is then trained on tasks sampled from a distribution based on the set of QD optimised tasks. The paper’s experiments on GridNav, Alchemy, and Racing tasks show significant improvement in agent performance over existing baselines such as robust prioritised level-replay.
Strengths: Focuses on high-level features more relevant to downstream tasks that the meta-RL agent should learn to adapt in, rather than focusing on simulator parameters as in prior unsupervised environment design (UED) work.
By shifting the focus in task distribution design for meta-RL training, this paper showed significant improvements on representative existing UED solutions, on a diverse set of evaluation tasks (GridNav, Alchemy, Racing). The evaluations are well-designed with sufficient ablations.
Makes a connection between QD and UED for meta-RL training, which is novel to the best of this reviewer’s knowledge.
The paper is very clearly written and illustrated. Relevant related works are discussed and contributions of the work are put into appropriate context. Content is well self-contained despite introducing new methods and tasks.
Weaknesses: It is still necessary to hand-craft the high-level features, which needs expert knowledge (or at least quite high familiarity with downstream tasks) and can be heuristic.
Evaluation only used one meta-RL method (VariBAD). It would be useful to see performance of other meta-RL methods such as RL^2 on the DIVA task distribution.
Technical Quality: 4
Clarity: 4
Questions for Authors: Does DIVA depend on any parameterizations at all? It seems like it might be possible (and very useful) to try DIVA on “weakly” parameterized tasks such as those generated by prompting an LLM or, say Genie [1].
[1] Genie: Generative Interactive Environments. Bruce el al. https://arxiv.org/abs/2402.15391
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review of our work and for appreciating its merits. We have addressed the noted weaknesses below, as well as their question about parameterizations.
> [W1] It is still necessary to hand-craft the high-level features, which needs expert knowledge (or at least quite high familiarity with downstream tasks) and can be heuristic.
We agree that this is a limitation of our work, which we discuss in Section 7 (Line 319). In our response to [W1] from Reviewer 7bxy, we have included discussion of an example work [1] where QD features are learned automatically; it is possible we can draw inspiration from works in this vein to further automatize DIVA.
[1] Ding, L., Zhang, J., Clune, J., Spector, L., & Lehman, J. (2024). Quality Diversity through Human Feedback: Towards Open-Ended Diversity-Driven Optimization. In Forty-first International Conference on Machine Learning.
> [W2] Evaluation only used one meta-RL method (VariBAD). It would be useful to see performance of other meta-RL methods such as RL^2 on the DIVA task distribution.
It is true that all evaluations are conducted using VariBAD as the base meta-RL algorithm. However, we did perform some preliminary evaluations with RL2, which is the same as VariBAD except that (1) instead of using a VAE with a latent space, we directly pass the RNN hidden state to the policy, and (2) we backpropagate the policy loss through the RNN (VariBAD only applies the VAE loss). We did not see a drastic difference in the relative performance between DIVA and its baselines, and because the policy loss is backpropagated through the RNN, training is significantly slower (as originally noted in [2]).
Given these results, in absence of a performance differential, we decided on VariBAD, a state-of-the-art meta-RL algorithm. One reason we were drawn specifically to VariBAD is the harmony between the nature of task distributions, and VariBAD’s distributional nature. In DIVA’s case, task distributions are represented as a multivariate normal (some dimensions may be uniform) over a QD archive. We believe it may be possible for future works to exploit this connection; e.g. by using the location of a solution in the archive to ground the VariBAD latent space (a rough idea).
[2] Zintgraf, L., Shiarlis, K., Igl, M., Schulze, S., Gal, Y., Hofmann, K., & Whiteson, S. (2019, September). VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning. In International Conference on Learning Representations.
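As a toy illustration of this distributional view of the archive (the function name, the diagonal normal fit, and the nearest-occupied-cell lookup are our assumptions, not DIVA's exact sampling procedure):

```python
import numpy as np

def sample_archive_tasks(archive_features, archive_solutions, num_tasks, rng=None):
    """Illustrative sketch: fit a diagonal normal over the feature values of
    occupied QD-archive cells, then draw training tasks by mapping each sample
    to the nearest occupied solution.
    archive_features: (num_occupied, num_features); archive_solutions: list."""
    rng = np.random.default_rng(rng)
    mean = archive_features.mean(axis=0)
    std = archive_features.std(axis=0) + 1e-8
    draws = rng.normal(mean, std, size=(num_tasks, archive_features.shape[1]))
    # snap each draw to the nearest occupied archive cell
    dists = np.linalg.norm(draws[:, None, :] - archive_features[None, :, :], axis=-1)
    return [archive_solutions[i] for i in dists.argmin(axis=1)]
```

The point of the sketch is that the task distribution lives in the same feature space as the archive, which is what suggests the connection to VariBAD's latent space mentioned above.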
> [Q1] Does DIVA depend on any parameterizations at all? It seems like it might be possible (and very useful) to try DIVA on “weakly” parameterized tasks such as those generated by prompting an LLM or, say Genie [1].
This is a great question, and certainly a direction worth exploring. DIVA in its current iteration can work with these weakly parameterized tasks, so long as (1) we have an environment generator that can accept this “weak” parameterization, and (2) we have access to some mutation method that can operate over this weak parameterization. For example, if the weak parameterization is language, we need some kind of language augmentation function (which prior literature has certainly explored). It may be even more effective to work in the embedding space itself, since the augmentation function would be both simpler and smoother than language augmentation; in this case we would essentially be using QD to perform prefix/embedding tuning. We are excited to see these kinds of directions pursued in follow-up works.
---
Rebuttal Comment 1.1:
Title: Comment
Comment: Thank you for the detailed rebuttal. I maintain my original rating. | Summary: This paper introduces DIVA, a technique for exploring the parameter space of parametrisable environments. The technique uses a variant of MAP-Elites to explore the environment parameter space, finding exemplar points spread across the parameter space, as measured with respect to some user provided features. The authors show that this generates a usefully diverse collection of environment instances by training a meta-learning agent on the generated levels and comparing it to a number of baselines.
Strengths: On the whole the paper is well written and clear.
The authors choose a sensible selection of baseline algorithms to compare against, giving the reader an understanding of the strengths of their approach.
Weaknesses: The primary weakness from my perspective is that the domains the experiments were conducted on are all very simple “toy” domains. In particular, these domains do not all have intrinsically complex parameterisations, and for two of them the authors had to, in effect, obfuscate the parameter spaces so the algorithm had a challenge to work against. This leads one to worry that the authors’ results may not be representative of more realistic open-ended domains where the parameter space complexity may manifest in a different way.
Another weakness of the technique is the authors need to hand-select the features used by the algorithm, per domain. The authors mention that this could be automated, but the fact still stands that in this paper, quite extensive hand tuning and selection - as reported in the appendices - of the objectives was made to get their results. The impact of this technique is much more limited if feature sets need to be hand-tuned per domain, so without demonstrating that this is not the case, I think the authors do not demonstrate that this technique is likely to have wide impact.
I have a further query about the way that the VariBAD hyperparameters were tuned, below in the questions section.
Technical Quality: 2
Clarity: 3
Questions for Authors: L36-37: Nit: “a” refers to the singular, “autocurricula” is plural.
L74: Nit: Add “are” between “algorithms” and “in”.
L129-131: Sentence doesn’t parse. Maybe remove “but”, or perhaps something got accidentally removed?
L159-161: I was confused by the experimental approach here. The downstream task distribution was first used to train the meta-learner’s hyperparameters, and then these hyperparameters were used to compare all of the approaches. But isn’t this essentially training on the test set? It feels like that the realistic setting would be one where the meta-learner does not know about the downstream tasks, and it would be important to demonstrate DIVA works under these conditions. I can appreciate that the authors might want to try to “factor” the behaviour of the meta-learner from the behaviour of DIVA, but it seems like a strong assumption that this is possible or useful.
L245: Nit: Figures 8 & 9 are referenced before figure 7.
L278: “Their approach”. Who’s approach?
L306: Define “HRI”.
L312: “DIVA’s” -> “DIVA”.
L312: “incorporating” -> “incorporate”.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The primary limitation is the one regarding hand-tuned feature sets, which I have expanded on above, in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s feedback and attention to detail. We have made all of the minor changes suggested under “Questions”, which will be present in the camera-ready draft. We address both major concerns below.
**(1) On the domains being “toy”**
> [W1] [...] domains [...] are all very simple [a] “toy” domains. In particular, these domains do not have all have [b] intrinsically complex parameterisations, and for two of them the authors had to, in effect, obfuscate the parameter spaces so the algorithm had a challenge to work against. [...] the authors’ results may [c] not be representative of more realistic open-ended domains where the parameter space complexity may manifest in a different way.
**(1a)** For academic institutions with limited computational resources, this set of domains presents challenge enough to usefully benchmark methods, while keeping training runs cheap enough that many experimental trials can be run—enabling us to achieve statistically significant results.
**(1b)** As discussed openly in Section 5, we recognize that the domains chosen require us to “obfuscate” the existing environment parameterization(s). We believe the reviewer may fundamentally misunderstand why adjusting the parameterization of these domains is necessary.
Most meta-RL/RL domains are released with a parameterization/generator that can produce levels of meaningful diversity. Especially for domains with complex dynamics, designing these is not so straightforward. Because DIVA-like approaches are not widely studied at present, most domain designers carefully hand-craft task distributions so that researchers will use their domains.
As meta-RL approaches improve, and domains become more complex and open-ended, well-structured environment parameterizations will become increasingly expensive to implement by hand. Approaches like DIVA will enable learning on these domains, without requiring carefully hand-designing a structured parameterization and corresponding task generator. We believe that demonstrating the existence of DIVA-like methods that can handle ill-parameterized domains is a necessary step to inspire researchers to build more open-ended domains with unstructured parameterizations in the first place.
**(1c)** The main property that makes a parameter space “unstructured” is the low probability of producing meaningfully diverse levels. This is the guiding principle used to design the challenging parameterizations for our evaluation domains—which, as we have explained above in response to (1b), was a necessity, since most (if not all) current academic domains are released with convenient parameterizations. While we agree that “the parameter space complexity may manifest in a different way” for more complex open-ended domains, what will remain is the overarching principle of meaningfully diverse levels having a low probability of being generated. Without detailing some other specific property of open-ended domains we might have overlooked, this weakness is a “whataboutism” that can be applied in similar form to any set of finite empirical findings.
**(2) On hand-selected features**
> [W2] […] The authors need to hand-select the features used by the algorithm, per domain. […] The impact of this technique is much more limited if feature sets need to be hand-tuned per domain, so without demonstrating that this is not the case, I think the authors do not demonstrate that this technique is likely to have wide impact.
First, only a few of the specified features (a subset of those detailed for each domain in the Appendix) were used in experiments, and the final designs for each were guided more by intuition (e.g. looking at the distributions and covariance matrices) than by any "extensive" experimental tuning. We also demonstrate with DIVA+ (Line 259) that the misspecification of features/objectives can be made up for by performing a small number of online evaluations, as UED works do.
We believe that this criticism also overlooks the “extensive hand tuning” that is currently required to design parameterizations for complex domains. It is not that other approaches are able to sidestep the necessity of “hand-tun[ing]” feature sets per domain; rather, this tuning is buried within the environment generation logic, which enables a conveniently structured parameterization.
The assumption of having access to some features of useful diversity is a much weaker assumption than that of having pre-existing generators that can produce levels across these same features of diversity. Consider: how does one design a structured parameterization/generator without first determining what the resulting level features should look like?
**Questions**
> [Q1] L159-161: [...] The downstream task distribution was first used to train the meta-learner’s hyperparameters, and then these hyperparameters were used to compare all of the approaches. But isn’t this essentially training on the test set? [...] I can appreciate that the authors might want to try to “factor” the behaviour of the meta-learner from the behaviour of DIVA, but it seems like a strong assumption that this is possible or useful.
We recognize the ambiguity our wording may have caused here. The meta-learner is in fact “tuned” on the structured parameterizations. As can be seen in Appendix C.2, only a few VariBAD hyperparameters are adjusted per domain, which mostly pertain to pragmatic considerations such as network structure, and RAM/storage considerations. Because the parameterizations we consider produce poor training levels, we validate that the meta-RL component works by running the agent on the structured generator. This is indeed to “’factor’ the behavior of the meta-learner from the behavior of DIVA” and was a pragmatic decision so that we could more confidently study the effects of our method over the baselines. Once the meta-learner configuration was determined for each domain, it was held fixed for each method evaluated.
---
Rebuttal Comment 1.1:
Comment: Important typo fix: under **Questions** [Q1]: "the meta-learning is in fact *not* tuned" (the *not* is missing in the current response, and we are unable to edit the response directly at this time).
---
Rebuttal 2:
Title: Additional Rebuttal Content
Comment: More on **(1a)**: GridNav is a standard navigation task which serves as a useful didactic domain (and serves this purpose in VariBAD [1] as well), the Racing domain is a control domain used to benchmark numerous UED works [2, 3], and symbolic Alchemy is a chemistry-inspired meta-RL domain that requires complex trial-and-error reasoning over numerous episodes, which nicely complements the navigation and control tasks.
More on **(2)**: We might also add that this is the first work to consider the combination of QD and meta-RL in this manner. In quality diversity (QD) literature, features are typically assumed to be pre-determined [4]. This pre-specification of features is indeed a limitation, but one that can be lifted in a similar fashion to QD works that learn features automatically [5]. Based on the contributions of our work, we decided it was appropriately out of scope to additionally consider the problem of learning features automatically.
[1] Zintgraf, L., et al. (2019, September). VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning. In International Conference on Learning Representations.
[2] Parker-Holder, J., et al. (2022, June). Evolving curricula with regret-based environment design. In International Conference on Machine Learning (pp. 17473-17498). PMLR.
[3] Jiang, M., et al. (2021). Replay-guided adversarial environment design. Advances in Neural Information Processing Systems, 34, 1884-1897.
[4] Pugh, Justin K. et al. (2016) “Quality Diversity: A New Frontier for Evolutionary Computation.” Frontiers in Robotics and AI 3:40.
[5] Ding, L. et al. (2024). Quality Diversity through Human Feedback: Towards Open-Ended Diversity-Driven Optimization. ICML. | Summary: The paper identifies the limitation that hand-crafting a sufficiently diverse set of simulated training tasks to bridge any significant sim-to-real gap is labor-intensive. It then proposes DIVA, a new evolutionary approach for generating diverse training tasks in the absence of well-behaved simulator parameterizations. The paper demonstrates how DIVA outperforms proposed baselines such as ACCEL, PLR, and DR.
Strengths: - The paper identifies a key limitation in current methods used to bridge the sim-to-real gap.
- The proposed approach is novel.
- There is a good spread of experimental results.
Weaknesses: - I think the presentation of the paper can be improved. For example, Figure 1 could include more descriptions about what is happening, such as defining E_S(theta) as the structured environment simulator and E_U(theta) as the unstructured environment simulator.
- In line 128, how was the number 80% chosen? And why is it described as “roughly”? More justifications for the choice of these parameters should be included.
Minor things
- Typo in line 184, “the final y location is determine by”
Technical Quality: 3
Clarity: 2
Questions for Authors: - Why does DIVA want to evolve a population of minimally diverse solutions from the original parameterization? Does DIVA assume that the original parameterizations are what humans care about? If so, doesn’t that mean we still need to ensure that the original parameterizations are handcrafted enough to be close to what humans care about?
- If the original parameterizations are chosen randomly or are not handcrafted to what humans might care about, how would DIVA perform?
- To what extent and when do we consider an environment simulator to have structured or unstructured parameterizations?
- In the GridNav experiment, why do we want to prevent the generation of diverse goals along the y-axis?
- Is there a hypothesis on why “the domain randomization over increasingly complex genotypes diminishes diversity in terms of goal location” (lines 187-188)?
- Line 188 claims that “DIVA, on the other hand, is able to capture this diversity.” However, doesn’t DIVA also decrease to the same level of percentage coverage as DR*? Why are there no confidence intervals for Figure 3a, like the others?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - Yes, the paper sufficiently covers the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review of our work, and for engaging us with curiosity to better understand the paper. Below we address the two main weaknesses noted by the reviewer, and clarify certain aspects of the work in response to the reviewer’s questions.
> [W1] [...] The presentation of the paper can be improved. For example, Figure 1 could include more descriptions about what is happening [...]
We appreciate this constructive feedback. We have updated Figure 1 (see Visual A in the 1-page supplemental), which now includes more descriptions as suggested by the reviewer.
Additionally, we have taken other efforts to improve the presentation quality of the rest of the paper. These changes include adding algorithmic pseudocode to the Appendix (Visual E in the 1-page PDF) for the sake of clarity, a number of minor updates in wording for the sake of exposition clarity, switching out pixelated plots for raw vectorized PDFs for better image resolution, and some other minor aesthetic changes along these lines.
We welcome any other concrete suggestions the reviewer may have for improving the presentation.
> [W2] In line 128, how was the number 80% chosen? And why is it described as “roughly”?
We thank the reviewer for pointing out this ambiguity. By “roughly” we were referring to the nature of samples versus population: we set this threshold using the 80th percentile of the samples, which encapsulates roughly 80% of the population. We understand this wording is unclear, and have therefore clarified what we mean precisely in the manuscript.
> […] More justifications for the choice of these parameters should be included.
80% was chosen as an intuitive heuristic to balance two needs: preserving the diversity contained within the DR samples used to initialize the archive (and the corresponding region that may be filled with useful mutations), and, given a fixed archive size, maintaining as high a resolution as possible to enable the fastest propagation of solutions to and within the target region.
We have run experiments to demonstrate that any moderate setting of this value prevents the failure modes of either extreme—see Visual D in the 1-page PDF (which will be included in the camera-ready, along with relevant discussion). In Alchemy (left), the failure mode preventing the generation of target solutions occurs when we set this value too low—i.e. when we take only a small part of the tail of the DR samples. In Racing (right) we see the opposite problem: when set too high, the value significantly reduces the number of target samples. In both cases, any moderate setting of the hyperparameter seems to do the job; it is really about preventing either extreme. We chose 80% to be on the conservative side, but from the experiments (in particular the Racing results), it looks like a setting of around 50% might even be better.
Additionally, we include robustness studies for other major hyperparameters in Visuals B, C, and F in the 1-page PDF (also for the camera-ready). All attached figures correspond to Alchemy, but we can also provide similar ablations for Racing in the camera-ready draft. In Visual B we show robustness of final returns to the mutation rate (Alchemy); in Visual C we see that more QD updates are generally better, but that our setting of 80k is not necessary to produce significant benefits over baselines; and in Visual F we see that while estimates for the normal parameters become less accurate with fewer sample features provided, DIVA is able to outperform baselines with as few as 5 downstream feature samples.
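For concreteness, the percentile-based bound described above can be sketched in a few lines. This is an illustrative sketch only: the synthetic Gaussian samples and the name `dr_feature_samples` are stand-ins of ours, not DIVA's actual implementation or feature values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature values measured on levels drawn from the
# unstructured (DR) generator -- synthetic stand-ins for real features.
dr_feature_samples = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Set the archive bound at the 80th percentile of the DR samples, so the
# initial archive region encapsulates roughly 80% of the population.
bound = np.percentile(dr_feature_samples, 80)

# Fraction of DR samples falling within the bound (~0.80 by construction).
covered = float(np.mean(dr_feature_samples <= bound))
```

Lowering the percentile keeps only a small tail of the DR samples; raising it stretches the archive over a region the target samples occupy sparsely, which matches the two failure modes discussed above.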
> [W3] Typo in line 184, “the final y location is determine by”
Fixed, thanks!
Questions
> [Q1] Why does DIVA want to evolve a population of minimally diverse solutions from the original parameterization? Does DIVA assume that the original parameterizations are what humans care about? If so, doesn’t that mean we still need to ensure that the original parameterizations are handcrafted enough to be close to what humans care about?
We believe there is a key misunderstanding here between the function of DIVA and the experimental setup. We choose parameterizations in our evaluation that do not provide the dials—so to speak—to tune the features that humans care about (like the dials on $E_S(\theta)$ in Figure 1; updated version in the 1-page PDF response, Visual A). And these challenging parameterizations are what produce the “minimally diverse” solutions with which the archive is initialized. DIVA is designed to work with parameterizations like these that are explicitly not designed to produce diversity that humans care about. DIVA overcomes these difficult parameterizations by using QD to populate an archive of solutions that are diverse along certain features—which are hand-designed in this work, but may be learned automatically in future work (e.g. [1], a citation we’ve added thanks to the suggestion of Reviewer 7bxy).
It is alternatively possible the reviewer might be asking if we need access to $E_S(\theta)$. DIVA does not need access to this generator/parameterization; all it needs is enough feature value samples from the downstream distribution to roughly estimate what our target distribution over the archive should be. It does not need the parameters that correspond to these scenarios. For example, if DIVA were tasked with generating house layouts for a house navigation task, it would need to know the target distribution for the number of rooms / the hallway width, etc. which could be determined from e.g. photos; but it would not need the parameters for constructing these sample houses in the simulator.
In short, it is the features that must be designed/specified to reflect what humans care about, not the parameterization. Most works assume the parameterization is well-behaved. We challenge this assumption, and replace it with an assumption we believe is more realistic for open-ended environment simulators.
---
Rebuttal 2:
Title: Additional Rebuttal Content (1)
Comment: > [Q2] If the original parameterizations are chosen randomly or are not handcrafted to what humans might care about, how would DIVA perform?
We believe this question likely reflects the same misunderstanding noted above; we refer the reviewer to our [Q1] response, and welcome any follow-up questions the reviewer may have on this point.
> [Q3] To what extent and when do we consider an environment simulator to have structured or unstructured parameterizations?
We introduce the idea of structured/unstructured parameterizations in Figure 1, and elaborate specifically on the unstructured parameterizations DIVA can work with starting on Line 88. We define structured parameterizations as ones that enable random sampling to produce diverse levels of interest, whereas increasingly unstructured parameterizations produce these useful levels with diminishing probability. These terms are not absolute, but are instead relative, and represent two poles of a continuum. Open-ended domains, with more degrees of freedom and greater complexity in environment dynamics, must either be carefully parameterized and provided with a sophisticated generator that can navigate these complexities (i.e. a structured parameterization), or they can be more flexibly parameterized with a simpler generator (an unstructured parameterization), where meaningful diversity is possible but not guaranteed with high probability. DIVA can make use of these more unstructured parameterizations, along with some knowledge of what levels should look like, in order to produce an abundance of training levels that are diverse in these ways. Importantly, this saves the prohibitive effort of having to carefully craft structured parameterizations/generators for complex open-ended domains.
> [Q4] In the GridNav experiment, why do we want to prevent the generation of diverse goals along the y-axis?
We believe this may pertain to the same misunderstanding addressed in [Q1], so we refer the reviewer to that response, but we also address this question more extensively below.
The objective of DIVA and the baselines is to produce meaningful diversity in the generated levels, and for the GridNav domain, this means producing goal diversity along both the x and y axes.
In our GridNav experimental setup, we inject complexity into the parameterization that makes it difficult to generate goal diversity along the y-axis. We design the parameterization in such a way that we can vary the complexity (unstructuredness) of the parameterization. As we increase the complexity, diversity along the y-axis becomes less and less likely. Our results in GridNav show that DIVA is best able to capture this diversity by (1) evolving levels with the express purpose of discovering and preserving levels of meaningful diversity (via the QD archive), and (2) avoiding agent evals like those of ACCEL/PLR, which are expensive and don't guarantee that all discovered diversity is preserved.
> [Q5] Is there a hypothesis on why “the domain randomization over increasingly complex genotypes diminishes diversity in terms of goal location” (lines 187-188)?
This is by design (described in Line 178) rather than some emergent property requiring a hypothesis, and is likely relevant to the misunderstanding addressed in [Q4]. Increasingly complex genotypes diminish diversity in terms of goal location by our design of the genotype; we design the general genotype scheme in such a way that we can conveniently vary the complexity—in this case, the number of genes that must coordinate to produce diversity along the y-axis. The effect of this design is that diversity along the y-axis becomes less and less likely (with random genotypes) as the complexity increases, because more genes need to coordinate.
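The combinatorial intuition behind this design can be sketched as follows. The numbers and the function `y_diversity_prob` are our illustrative placeholders, not the paper's actual genotype scheme.

```python
# Illustrative sketch (not the actual GridNav genotype): suppose producing
# goal diversity along the y-axis requires k genes to coordinate, each
# independently taking a "coordinating" value with probability p under
# random (DR) sampling. The chance a random genotype yields y-diversity
# then shrinks geometrically as the complexity k grows.
def y_diversity_prob(p: float, k: int) -> float:
    return p ** k

# With p = 0.5, the probability halves with each extra coordinating gene.
probs = [y_diversity_prob(0.5, k) for k in (1, 2, 4, 8)]
```

Under these assumed numbers, `probs` falls from 0.5 down to under 0.4%, mirroring how random genotypes become less and less likely to produce y-axis diversity as complexity increases.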
---
Rebuttal 3:
Title: Additional Rebuttal Content (2)
Comment: > [Q6] Line 188 claims that “DIVA, on the other hand, is able to capture this diversity.” However, doesn’t DIVA also decrease to the same level of percentage coverage as DR*? Why are there no confidence intervals for Figure 3a, like the others?
Figure 3a shows the ability of DR, DR*, and DIVA to “capture” diversity in generated levels with parameterizations of varying complexity. DR* reflects the upper bound of what a method like PLR can achieve with DR levels—it is the maximum diversity seen by DR (i.e. the total number of unique levels seen over the entire generation process), whereas DR is just the final archive produced. DR has no memory mechanism to preserve diversity, whereas a method like PLR, with its priority buffer, can latch on to the diversity contained in DR* (but still may fail to preserve all discovered diversity).
The reviewer is correct that DIVA is shown to capture just as much as DR* at the most complex parameterization we tested. However, given the nature of the experimental design, this trailing off is destined to happen at a high enough complexity; it becomes statistically unlikely for any algorithm to produce meaningful diversity. It was a purposeful decision to demonstrate that DIVA tails off in its ability to handle these highly challenging parameterizations. If we increased the number of update steps allowed for DIVA, or inserted a more sophisticated emitter algorithm (instead of MAP-Elites), DIVA would be able to benefit far more than DR* (the same trend would extend, but fall off just the same at a high enough complexity).
The takeaway from this plot is that, as the complexity increases, QD is able to uncover and preserve diversity in the archive better than DR (or our invented oracle DR*). This is also why we have not included confidence intervals—the point is to show the trend; each point is just a single run of QD updates. Given the cleanness of the trend, we found it unnecessary to run more seeds for this plot, but we can do this for the camera-ready draft. Either way, we have clarified this point in the paper. Thanks for your feedback!
---
Rebuttal Comment 3.1:
Comment: Thanks to the authors for their detailed response and the work put into the new experiments. The new experiments do address many of the concerns I had. For the presentation of the paper, I hope the authors will put the explanations for many of the design choices (like what the authors wrote for rebuttals above) into the revised manuscript. I think that will significantly improve the paper's presentation. As such, I have increased my score.
---
Reply to Comment 3.1.1:
Comment: We appreciate the reviewer for engaging with our rebuttal and for helping strengthen our paper. We will follow through with the reviewer's feedback to include the explanations for the design choices (including the content contained in our rebuttal) in the camera-ready manuscript. This will be added along with the ablations and other updates noted, which have already been applied to our working manuscript draft. We thank the reviewer for updating their score, and are more than happy to address any remaining concerns. | Summary: This paper describes an approach for learning a QD-archive to be used as a proxy for samples of test environments, and shows that this results in improved performance in producing a set of meta-learning tasks over DR and UED baselines.
Strengths: The introduction of QD approaches into UED algorithms is a promising algorithm for improvement. The empirical results appear convincing, and the approach is quite natural.
Weaknesses: The comparison between DIVA and PLR/ACCEL is a bit of an apples-to-oranges comparison. Regret-based UED methods like PLR and ACCEL are meant to be making decisions under ignorance, where there is no information known about the target distribution. There are other UED approaches designed for the case where some information is known, and thus it is a decision under risk. Specifically, it would be better to compare against SAMPLR, CLUTR, or DRED. Since SAMPLR requires simulator access, the fairest comparison would be against CLUTR or DRED.
[SAMPLR] Jiang, Minqi, et al. "Grounding aleatoric uncertainty for unsupervised environment design." _Advances in Neural Information Processing Systems_ 35 (2022): 32868-32881.
[CLUTR] Azad, Abdus Salam, et al. "Clutr: Curriculum learning via unsupervised task representation learning." _International Conference on Machine Learning_. PMLR, 2023.
[DRED] Garcin, Samuel, et al. "DRED: Zero-Shot Transfer in Reinforcement Learning via Data-Regularised Environment Design." _Forty-first International Conference on Machine Learning_. 2024.
That being said, the current results appear more significant than those of these other works (though it is hard to tell across domains), and this approach has not been tried in meta-learning before. It is important to note that UED has been used in meta-learning before, for instance in AdA.
[AdA] Team, Adaptive Agent, et al. "Human-timescale adaptation in an open-ended task space." _arXiv preprint arXiv:2301.07608_ (2023).
It occurs to me that DIVA + UED as discussed starting at line 259 could be used as an algorithm for decisions under ignorance, and would possibly be quite a good algorithm. It may be worth running the approach from 259, maybe without any test time data, on the traditional maze, F1, and bipedal walker environments and transfer tasks to check if it consistently outperforms existing methods.
It seems like the assumption about the probability of generating a useful level from the simulator underlying DIVA is similar to the assumption necessary for ACCEL or PLR. Both of these approaches should work if there is a ~0.01 % chance of generating a useful level. The boot-up time would just be a bit slower to get the initial buffer of levels.
The limitation of online agent evaluations is a real bottleneck for the UED approaches which DIVA avoids, but it is a bit of an odd comparison as DIVA generates the model once offline, and thus isn't aiming to be adaptive towards current agent performance. It seems like DIVA loses the benefits of adaptivity by completely removing online agent evaluations. I would be interested if a DIVA-like approach with limited online evaluations could keep the best of both.
In Figure 5d, "unique genotypes" is quite a bad metric for diversity, as completely random levels would score quite well even though most random levels are quite qualitatively similar to each other.
### Clarity
In the abstract it is not clear to me what "well-behaved simulator parameterisations" means, what "unscalable flexibility" refers to, or what "ill parameterised simulators" means.
It would be good to have a citation on line 46 for how one could learn the axes of QD.
It's not immediately clear what "genotype" means, and it seems to be used in different ways on line 82 and 88. In the first case it may be more conventional to call it the "level generator" and in the second case it could be more conventional to call it the "level parameters".
Technical Quality: 3
Clarity: 2
Questions for Authors: How would this method compare to CLUTR or DRED?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to write such a thorough review, and for not only appreciating the merits of this work, but for identifying points of weakness to strengthen our manuscript. We have responded to each of the weaknesses listed and questions posed below, and look forward to further discussion.
> [W1] The comparison between DIVA and PLR/ACCEL is a bit of an apples-to-oranges comparison. [...] the fairest comparison would be against CLUTR or DRED.
>
> [Q1] How would this method compare to CLUTR or DRED?
We have added each of the suggested approaches (SAMPLR, CLUTR, and DRED) to the related works section, which will appear in our camera-ready draft. Our assessment of each of the individual works—see conclusions below, and **full assessments attached as "official comments"**—is that none are as appropriate a baseline choice as ACCEL, despite some similarities (especially DRED) to our problem setting.
**SAMPLR** We agree with the reviewer’s assessment that SAMPLR is less relevant for direct comparison to DIVA since SAMPLR requires access to the simulator itself (and the direct downstream sampling distribution, which we do not assume access to either). However, we believe SAMPLR’s relevance to the meta-RL setting (meta-RL potentially suffers even more from this bias) warrants mention in the related work section---an addition which will appear in the camera-ready draft.
**CLUTR** By training a VAE on levels sampled from the generator distribution, which lack greatly in diversity in DIVA’s setting, CLUTR would fail to learn a generator that produces phenotypically diverse levels. ACCEL, by performing mutations to existing levels, is able to evolve its levels towards greater diversity over time, making it a stronger baseline for our setting. Due to its relevance to future work, we have cited and added discussion of this work to the manuscript, which will appear in the camera-ready draft.
**DRED** The biggest difference between DRED’s and DIVA’s settings is that DIVA does not assume access to the underlying level parameters, but rather only the feature values---and these alone are used to define the archive. This difference makes DRED suffer from the same issue as CLUTR when faced with the setting that DIVA operates in: if the parameter space cannot produce diverse levels for the VAE training, then the resulting model will produce levels just as lacking in diversity. Resampling a la PLR will have a similar effect as our PLR baseline. Thus, as was mentioned of CLUTR above, ACCEL remains a stronger baseline than DRED for this reason. That being said, given DRED’s close resemblance to our setting and possible applicability to future work (like CLUTR), we have cited and added discussion to our manuscript, which will appear in the camera-ready draft.
> [W2] It occurs to me that DIVA + UED [...] could be used as an algorithm for decisions under ignorance, and would possibly be quite a good algorithm. It may be worth running [...] on the traditional maze, F1, and bipedal walker environments and transfer tasks to check if it consistently outperforms existing methods.
>
> [...]
>
> The limitation of online agents evaluations is a real bottleneck for the UED approaches which DIVA avoids, but it is a bit of an odd comparison as DIVA generates the model once offline, and thus isn't aiming to be adaptive towards current agent performance. It seems like DIVA looses the benefits of adaptivity by completely removing online agent evaluations. I would be interested if a DIVA-like approach with limited online evaluations could keep the best of both.
We would like to note that the DIVA+ algorithm (which demonstrates learning benefits when the archive is “misspecified”) is indeed a “DIVA-like approach with limited online evaluations”, so the reviewer is correct in identifying the promise of such a combination. And we agree that this kind of approach warrants more exploration; we include these preliminary results simply to showcase the promise of an approach of this nature as a potential future direction, and leave it to future work to both (1) provide a more principled integration of UED/QD (our DIVA+ algorithm is just one such combination), and (2) produce more extensive results with such an approach on relevant domains, such as the ones the reviewer has listed.
> [W3] It seems like the assumption about the probability of generating a useful level from the simulator underlying DIVA is similar to the assumption necessary for ACCEL or PLR. Both of these approaches should work if there is a ~0.01 % chance of generating a useful level. The boot-up time would just be a bit slower to get the initial buffer of levels.
While the reviewer is correct in highlighting that this general assumption is shared between DIVA and ACCEL (and PLR), we demonstrate that DIVA is able to work with generators that produce smaller percentages of useful levels. In practice, the boot-up time for ACCEL (and PLR) is more than a bit slower. Because ACCEL must simulate the agent on each environment, while DIVA only requires rendering the first timestep, ACCEL is $|\tau| \times H$ times slower if we wish to perform just as many evolutionary updates as DIVA (in practice, rendering the first timestep to compute features may be slower depending on the environment, but the more general point still holds). Because DIVA can produce far more evolutionary updates in the same amount of time by avoiding simulating the agent on every level (and DIVA+ avoids this too, by only simulating levels that constitute the final archive, after all the initial QD updates), DIVA can work more effectively with environments where generating a useful level is even rarer. This is a point we tried to get across in Section 3 (Problem setting), but for the camera-ready we will also include the reasoning we’ve provided above—we thank the reviewer for prompting us to consider this point more closely in our discussion!
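The cost argument above can be made concrete with a back-of-the-envelope sketch; the numbers below are hypothetical placeholders of ours, not measurements from the paper.

```python
# Illustrative cost comparison per evolutionary update: ACCEL must simulate
# the agent on each candidate level, costing roughly |tau| episodes x H
# steps of simulation, whereas DIVA only renders the first timestep of a
# level to compute its features.
num_episodes = 4   # |tau|: hypothetical episodes per agent evaluation
horizon = 100      # H: hypothetical steps per episode

accel_steps_per_update = num_episodes * horizon  # agent simulation steps
diva_steps_per_update = 1                        # render first timestep only

slowdown = accel_steps_per_update // diva_steps_per_update
```

Under these assumed numbers, ACCEL pays roughly 400 simulation steps for every one DIVA pays, which is why DIVA can afford far more evolutionary updates in the same wall-clock budget.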
---
Rebuttal 2:
Title: Additional Rebuttal Content
Comment: > [W4] In Figure 5d, “unique genotypes” is quite a bad metric for diversity, as completely random levels would score quite well even though most random levels are quite qualitatively similar to each other.
We agree with the reviewer that unique genotypes is indeed a bad metric for diversity. We include this plot not to highlight diversity, but to make the same observation the reviewer notes (see Line 207)—that despite DR (for example) producing more unique genotypes than DIVA, DIVA is able to produce greater meaningful diversity, which leads to better performance. We also note that in Line 208 we mistakenly referenced “Figure 5” (in general) instead of “Figure 5d”, and we have made this correction.
> [Q2] In the abstract it is not clear to me what "well-behaved simulator parameterisations" means, what "unscalable flexibility" refers to, or what "ill parameterised simulators" means.
We agree with the reviewer that the abstract should be self-contained, and that terminology should either be widely understood or defined in the abstract itself. We will modify these phrases in the camera-ready draft to be clear and unambiguous. We thank the reviewer for bringing our attention to these instances.
> [Q3] It would be good to have a citation on line 46 for how one could learn the axes of QD.
Great suggestion; we have added one such example [1] to our manuscript to be present in the camera-ready draft.
[1] Ding, L., Zhang, J., Clune, J., Spector, L., & Lehman, J. (2024). Quality Diversity through Human Feedback: Towards Open-Ended Diversity-Driven Optimization. In Forty-first International Conference on Machine Learning.
> [Q4] It's not immediately clear what "genotype" means, and it seems to be used in different ways on line 82 and 88. In the first case it may be more conventional to call it the "level generator" and in the second case it could be more conventional to call it the "level parameters".
The reviewer is correct that the first use of genotype in Line 82 is misleading; we will update it to “level parameterization”, as this is what we mean. In Line 88 we will use “level parameters” as the reviewer suggests for clarity, but we will still introduce and use “genotype” as a shorthand/alternative for “level parameters”, in order to respect the connection to evolutionary approaches.
---
Rebuttal 3:
Title: SAMPLR, CLUTR, and DRED Full Assessments
Comment: > [SAMPLR] Jiang, Minqi, et al. "Grounding aleatoric uncertainty for unsupervised environment design." Advances in Neural Information Processing Systems 35 (2022): 32868-32881.
**SAMPLR** attempts to correct the “curriculum-induced covariate shift” (CICS)—with respect to the downstream task distribution—that results from learning an adaptive curriculum. SAMPLR mitigates this bias over aleatoric parameters while preserving the benefits of utilizing a curriculum (versus sampling directly from the downstream task parameters).
Assessment: We agree with the reviewer’s assessment that this approach is less relevant for direct comparison to DIVA since SAMPLR requires access to the simulator itself (and the direct downstream sampling distribution, which we do not assume access to either). However, we believe SAMPLR’s relevance to the meta-RL setting (meta-RL potentially suffers even more from this bias) warrants mention in the related work section---an addition which will appear in the camera-ready draft.
> [CLUTR] Azad, Abdus Salam, et al. "Clutr: Curriculum learning via unsupervised task representation learning." International Conference on Machine Learning. PMLR, 2023.
The authors of **CLUTR** argue that many UED methods are burdened by the difficulty of simultaneously learning the task manifold (implicitly through RL, for PAIRED-based methods) and the curriculum over these tasks. CLUTR disentangles the task representation learning from curriculum learning by first pretraining a task manifold with a VAE in an unsupervised manner, and then learning a curriculum over this (fixed) task manifold via maximizing regret.
Assessment: By training a VAE on levels sampled from the generator distribution, which lack greatly in diversity in DIVA’s setting, CLUTR would fail to learn a generator that produces phenotypically diverse levels. ACCEL, by performing mutations to existing levels, is able to evolve its levels towards greater diversity over time, making it a stronger baseline for our setting. CLUTR may, however, be a useful future addition to a DIVA-like approach, e.g. by converting the archive to a learned manifold via a VAE. This could be potentially useful for storage considerations (i.e. if our archive is too large), or to extrapolate beyond levels that exist in the archive. Performing curriculum learning over this distilled archive would look something like our DIVA+ algorithm, which combines the benefits of DIVA (diversity) with UED (curriculum) approaches, but would have these storage/extrapolatory benefits. Due to its relevance to future work, we have cited and added discussion of this work to the manuscript, which will appear in the camera-ready draft.
> [DRED] Garcin, Samuel, et al. "DRED: Zero-Shot Transfer in Reinforcement Learning via Data-Regularised Environment Design." Forty-first International Conference on Machine Learning. 2024.
The authors introduce data-regularized environment design (**DRED**), which combines adaptive sampling (like UED) with a level generator that approximates p(x). DRED assumes access not directly to p(x), but to a set of level parameters X with which a VAE can be trained. Once the VAE is trained, PLR is used to produce scores for sampling (other details omitted here for simplicity).
Assessment: The biggest difference between DRED’s and DIVA’s settings is that DIVA does not assume access to the underlying level parameters, but rather only the feature values---and these alone are used to define the archive. The QD search is then tasked with producing samples that match the feature distribution (in our work, for both simplicity and sample efficiency, assuming either independent Gaussian or Uniform distributions). DIVA’s archive is initialized with random parameters from the parameter space; it never sees the parameters corresponding to the feature values. This difference makes DRED suffer from the same issue as CLUTR when faced with the setting that DIVA operates in: if the parameter space cannot produce diverse levels for the VAE training, then the resulting model will produce levels just as lacking in diversity. Resampling a la PLR will have a similar effect as our PLR baseline. Thus, as was mentioned of CLUTR above, ACCEL remains a stronger baseline than DRED for this reason. That being said, given DRED’s close resemblance to our setting and possible applicability to future work (like CLUTR), we have cited and added discussion to our manuscript, which will appear in the camera-ready draft. We would like to note, however, that DRED’s absence from our original discussion was due to its very recent publication (uploaded to arXiv in February 2024; presented at ICML in July 2024).
---
Rebuttal Comment 3.1:
Title: Response to Rebuttal by Authors
Comment: Thank you for the detailed response, the proposed changes do clarify the paper significantly. Unfortunately I don't see room to increase my score past "high impact on at least one sub-area", but I appreciate the work and do believe this paper should be accepted. | Rebuttal 1:
Rebuttal: We thank the reviewers for the time they have taken to engage constructively with our work. Reviewer _kFbG_ appreciates the “novel [...] connection between QD and UED for meta-RL training” we have developed, along with our work’s explicit “focus on [relevant] high-level features”—in contrast to the implicit assumptions of diversity along these features in prior works—and the resulting “significant improvements [on] well-designed” evaluations. Reviewer _7bxy_ finds the approach “natural”, and the results “convincing”. Reviewer _34nq_ believes the paper “identifies a key limitation” in existing literature and contains a “good spread of experimental results”. Reviewer _nCFY_ finds the paper “well written and clear”, and that a “sensible selection of baseline algorithms” are used for evaluation.
There are four main weaknesses in total that appear to be sticking points for Reviewers _34nq_ and _nCFY_, specifically. We have addressed each of these points at length in the individual responses, and summarize the holistic discussion of each point here.
**(1) Specifying features.** A commonly noted limitation among the reviewers, and one we address ourselves in Section 7 of the manuscript, is that the “axes of diversity must be specified” (Line 319). Reviewer _kFbG_ views DIVA’s explicit “focus on [relevant] high-level features” as a strength of our work, an aspect which contributes to our paper identifying a “key limitation” (Reviewer _34nq_) in existing literature. As we point out in a number of individual responses, while QD literature often assumes access to these high level features, some works (e.g. [1]) are able to determine these automatically. We view this as an interesting avenue for future work, but outside the scope of this paper, which, as previously noted is already a “novel [...] connection between QD and UED for meta-RL training” (Reviewer _kFbG_). For Reviewer _nCFY_ alone it is a major sticking point from which they conclude (in addition to other human involvement in feature/objective selection) that we have “not demonstrate[d] that this technique is likely to have wide impact”. In response, we remind the reviewer that, despite explicitly making this new assumption ourselves, we are removing a much stronger assumption implicit in prior works:
> The assumption of having access to some features of useful diversity is a much weaker assumption than the assumption of having pre-existing generators that can produce levels across these same features of diversity. Consider—how does one design a structured parameterization/generator without first determining what the resulting level features should look like?
See our full response to Reviewer _nCFY_ for more discussion on this point.
**(2) Presentation.** Reviewer _34nq_ expresses concern over the presentation of the paper as their main weakness. Specifically, the example they give for where clarity can be improved is Figure 1. We update Figure 1 along the lines of the reviewer’s suggestions (see **Visual A in the attached PDF**), add algorithmic pseudocode to the Appendix (see **Visual E in the attached PDF**) to further elucidate our approach, and make some other minor formatting changes to improve the presentation for the camera-ready draft. It is worth noting that, beyond typos and updates to specific cases of ambiguous wording, no other major presentation concerns were noted by Reviewer _34nq_ or the other reviewers. Reviewer _nCFY_ says “on the whole the paper is well written and clear”, and Reviewer _kFbG_ finds the paper “very clearly written and illustrated”.
**(3) Hyperparameters.** Reviewer _34nq_ expresses confusion over one of our hyperparameter choices (setting an archive bound based on 80% of samples), and more generally notes that these decisions are not well justified in the submitted draft. We clarify our wording and justify our decision for the 80% bound hyperparameter, and provide a robustness study on both Alchemy and Racing domains to verify the intuition that any moderate setting of this hyperparameter avoids failure modes (see **Visual D in the attached PDF**). We additionally provide studies for other hyperparameters, demonstrating DIVA’s robustness to their settings (see **Visuals B, C, and F**). For more details, see our response to Reviewer _34nq_ [W2].
**(4) Evaluation domains.** Reviewer _nCFY_’s main sticking point (unique among the reviewers, who otherwise find the evaluation sufficient and convincing) is that the domains are “toy”, specifically because we “obfuscate” the parameter spaces to produce challenging parameterizations. We first dispute the notion that the domains themselves are overly toy, both in terms of the unique challenges they pose for methods in our problem setting, and according to standards of academic research in meta-RL and UED. We then explain why it was necessary to reparameterize the tasks—namely, that most domains come pre-equipped with a parameterization that can generate levels of meaningful diversity. For open-ended domains of the future, with more complex dynamics, well-structured environment parameterizations will not be as feasible to implement by hand. Approaches like DIVA will enable learning on these domains, without requiring carefully hand-designed structured parameterizations or task generators. And crucially, we believe that demonstrating the existence of DIVA-like methods that can handle ill-parameterized domains is a necessary step to inspire researchers to build more open-ended domains with unstructured parameterizations in the first place.
[1] Ding, L., Zhang, J., Clune, J., Spector, L., & Lehman, J. (2024). Quality Diversity through Human Feedback: Towards Open-Ended Diversity-Driven Optimization. In Forty-first International Conference on Machine Learning.
Pdf: /pdf/3522508fd6115c42565a9a19d0aba8f679343d7d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Protein-Nucleic Acid Complex Modeling with Frame Averaging Transformer | Accept (poster) | Summary: The authors propose a method for predicting contacts between proteins and aptamers based on frame averaging transformers. They showcase their method on contact prediction and unsupervised aptamer screening, showing improvements over some baselines. The authors also compare to RoseTTAFoldNA, which seems to outperform their method.
Strengths: - Frame averaging is a very interesting idea (published in another venue before) and applying it to this specific problem seems to be very promising
- The presentation is very clear and the paper is well written
- The problem addressed (aptamer screening) is very important and practically relevant
- The authors compare to a large number of different architectures (see Table 4)
Weaknesses: Comparing to a pre-trained structure prediction method is clearly something to be explored well, and the authors do that in section 4.4 (with additional results with AF3 in the appendix). At the same time, this part seems to me not very well developed in the paper. I am especially confused by Figure 3, which the authors comment with "[..] where FAFormer can achieve comparable performance to RoseTTAFoldNA using unbounded structures." Maybe I am not understanding something but the right hand side of the plot seems to clearly indicate that at least for RNAs, RoseTTAFoldNA strongly outperforms the authors' method.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Could the authors expand section 4.4 and discuss the combination of Figure 3 and Table 5?
- Do the times in Table 6 include MSA search for Rosetta? If the protein is unchanged, could you make it more efficient by doing the MSA search only once?
- Could the authors expand a bit on possible future research (e.g. applying the same method to other screening tasks)?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Authors discuss some scientific limitations. I do not foresee any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ```
Weakness: Comparing to a pre-trained structure prediction method is clearly something to be explored well, and the authors do that in section 4.4 (with additional results with AF3 in the appendix). At the same time, this part seems to me not very well developed in the paper. I am especially confused by Figure 3, which the authors comment with "[..] where FAFormer can achieve comparable performance to RoseTTAFoldNA using unbounded structures." Maybe I am not understanding something but the right hand side of the plot seems to clearly indicate that at least for RNAs, RoseTTAFoldNA strongly outperforms the authors' method.
```
**Ans**: We apologize for any confusion and will make this clearer in our updated manuscript. Our claim is based on the F1 scores, which show better performance for protein-DNA complexes (0.103 vs 0.087) and comparable performance for protein-RNA complexes (0.108 vs 0.12). It should be noted that the number of test protein-RNA complexes from RoseTTAFoldNA is limited (16 complexes), which might not be able to comprehensively assess our method's performance.
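To make the metric under discussion concrete, here is a minimal sketch of how a contact-map F1 of this kind can be computed (illustrative only, not the paper's evaluation code; the 8 Å contact threshold and the use of a single representative atom per residue/base are assumptions for the sake of the example):

```python
import numpy as np

def contact_map(prot_xyz, na_xyz, thresh=8.0):
    """Binary contact map: entry (i, j) is True when protein residue i and
    nucleic-acid base j (each represented by one atom) lie within `thresh` angstroms."""
    d = np.linalg.norm(prot_xyz[:, None, :] - na_xyz[None, :, :], axis=-1)
    return d < thresh

def contact_f1(pred, true):
    """F1 over all (residue, base) pairs, with contacts as the positive class."""
    tp = np.logical_and(pred, true).sum()
    if tp == 0:
        return 0.0
    precision = tp / pred.sum()
    recall = tp / true.sum()
    return 2 * precision * recall / (precision + recall)
```

Because true contacts are sparse relative to the full set of residue-base pairs, even small absolute F1 values (such as the roughly 0.1 scores discussed here) can still meaningfully rank models.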
```
Question: Could the authors expand section 4.4 and discuss the combination of Figure 3 and Table 5?
```
**Ans**: Aligning Figure 3 with Table 5, we observe that while FAFormer does not achieve higher performance than RoseTTAFoldNA in terms of contact map prediction, it significantly outperforms RoseTTAFoldNA in aptamer screening tasks. We attribute this to two main reasons:
- RoseTTAFoldNA as a foundational structure prediction model is optimized for generally predicting the protein-RNA complex structure, which might bias its performance on some specific protein targets. For example, it can achieve good performance on proteins GFP and HNRNPC but fails for NELF.
- For the contact map prediction comparison, the protein-RNA test set used by RoseTTAFoldNA is limited (16 complexes), which may not comprehensively evaluate the performance of FAFormer.
All these discussions will be added to our revised manuscript.
```
Question: Do the times in Table 6 include MSA search for Rosetta? If the protein is unchanged, could you make it more efficient by doing the MSA search only once?
```
**Ans**: The times reported in Table 6 include the time required for searching MSAs. For the screening task, RoseTTAFoldNA only needs to search MSAs for protein targets once. However, it still needs to search MSAs for each RNA sequence individually. Besides, the forward process of RoseTTAFoldNA is computationally expensive because the inclusion of MSAs results in a large input sequence matrix. We will add this discussion to our revised manuscript.
```
Question: Could the authors expand a bit on possible future research (e.g. applying the same method to other screening tasks)?
```
**Ans**: For future applications, the proposed paradigm for aptamer screening can be extended to other modalities, such as protein-small molecules and antibody-antigen. Moreover, the strong correlation between contact prediction and affinity estimation demonstrated in our paper can guide future model development. Specifically, this correlation suggests promising directions for designing new objectives and collecting datasets that better capture the nuances of molecular interactions.
Regarding the architecture, FAFormer introduces a novel approach to equivariant model design by leveraging the flexibility of Frame Averaging (FA). This idea opens up numerous possibilities for future research, including exploring different ways to integrate FA with various neural network architectures.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my concerns. My score (accept) remains the same.
---
Reply to Comment 1.1.1:
Comment: Thank you again for providing valuable comments on our paper! All the suggestions will be added to our updated manuscript. | Summary: This paper mainly focuses on protein-nucleic acid contact prediction and unsupervised aptamer virtual screening. The latter is based on an unsupervised learning approach by predicting the contact maps. An equivariant architecture that integrates frame averaging and transformer blocks is proposed for this task. Experiments on the two tasks demonstrate the superiority of the proposed architecture compared with other geometric deep learning models.
Strengths: - The proposed architecture is effective on the contact map prediction tasks. The tasks are novel and not well-studied in the community of machine learning.
- The unsupervised learning approach that predicts contact maps is effective on aptamer screening tasks. This perspective is novel.
- The authors have provided codes and reproducibility can be ensured.
Weaknesses: - The novelty of the proposed architecture is limited. The proposed architecture simply combines frame averaging and transformer blocks. The idea that builds the attention module based on SE(3)-invariant features is proposed in Equiformer. The dataset and benchmark track may be more suitable for this paper.
- The expressiveness of the proposed architecture is demonstrated in the contact prediction tasks. More experiments on more general tasks are needed to further show the effectiveness of the proposed architecture, such as protein-protein docking and others.
Technical Quality: 2
Clarity: 3
Questions for Authors: See the above weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have pointed out the limitations due to only modeling certain atoms in the related tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ```
Weakness: The novelty of the proposed architecture is limited. The proposed architecture simply combines frame averaging and transformer blocks. The idea that builds the attention module based on SE(3)-invariant features is proposed in Equiformer. The dataset and benchmark track may be more suitable for this paper.
```
**Ans**: We respectfully disagree with the statements and would like to emphasize the contributions of our paper:
First, this is the first work designing a new equivariant Transformer based on Frame Averaging (FA). Rather than simply combining the two, each module is **carefully integrated** with FA, which has **effectively enhanced model performance**, as demonstrated in our ablation study:
- The local frame edge module focuses on local spatial context by constructing the frames on the point cloud centered around each node;
- The biased MLP attention module applies FA to enable equivariant multi-head attention on the geometric features;
- Global frame FFN extends the FFN by incorporating geometric information within node representations using FA, allowing the attention to be conducted in a geometry-aware manner.
Second, the way FAFormer achieves equivariance is **completely different** from Equiformer which relies on spherical harmonics operations. As discussed in the Introduction (*Lines 32-38*), models like Equiformer **compromise efficiency** due to the complexity of irreducible representations, as shown in our efficiency comparison (*Appendix B.2*). We believe that incorporating FA within the model opens new possibilities for designing equivariant architectures in this domain.
Third, besides the architectural contributions, we propose **a new paradigm** **for unsupervised aptamer screening**. This paradigm connects contact map prediction and affinity estimation between two molecules, going beyond simply collecting datasets and constructing benchmarks.
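For readers unfamiliar with the Frame Averaging construction that underlies this design, its core mechanism can be sketched in a few lines (a generic illustration of the standard PCA-frame recipe on point clouds, not the FAFormer modules themselves; the encoder `f` here is an arbitrary stand-in):

```python
import numpy as np

def frames(X):
    """The 8 PCA frames of a 3D point cloud X (n x 3): the centroid plus all
    sign flips of the principal axes. Assumes non-degenerate covariance
    eigenvalues, so the axis directions are well defined up to sign."""
    t = X.mean(axis=0)
    C = (X - t).T @ (X - t)
    _, V = np.linalg.eigh(C)  # columns = principal axes (signs arbitrary)
    signs = [(sx, sy, sz) for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
    return [(V * np.array(s), t) for s in signs]  # flip column signs per frame

def fa_invariant(X, f):
    """Average an arbitrary encoder f over all frames; the result is invariant
    to rotations, reflections, and translations of X."""
    return np.mean([f((X - t) @ R) for R, t in frames(X)], axis=0)
```

Averaging over all $2^3 = 8$ sign-flipped frames absorbs the eigenvector sign ambiguity and yields an invariant output for any encoder; equivariant (rather than invariant) outputs are obtained by mapping each frame's output back through the corresponding frame before averaging.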
```
Weakness: The expressiveness of the proposed architecture is demonstrated in the contact prediction tasks. More experiments on more general tasks are needed to further show the effectiveness of the proposed architecture, such as protein-protein docking and others.
```
**Ans**: The contact map prediction result between proteins is presented in our paper (*Table 3*). Besides, we have provided results on **binding site prediction** in *Appendix E.1*. This task solely takes a protein as input and aims to predict the NA-binding residues on the protein, which is a node-level prediction task.
To further demonstrate the performance of FAFormer, we here present its performance on two protein understanding tasks, including:
- Fold prediction: This task aims to predict the fold class (a total of 1195 classes) for each protein, with three different test sets (Fold, Family, and Super-Family). Accuracy is used as the evaluation metric.
- Reaction prediction: This task aims to predict the class of reactions for a given protein catalyzes (a total of 384 classes). Accuracy is used as the evaluation metric.
We follow the experimental settings in ProteinWorkshop[1], including dataset splits and input features (Cα-only). The statistics of the datasets are shown below:
| | Fold | Reaction |
| --- | --- | --- |
| #Train | 12.3K | 29.2K |
| #Valid | 0.7K | 2.6K |
| #Test | 1.3/0.7/1.3K | 5.6K |
The comparison results are presented below from which we can observe the best performance of FAFormer across most of the tasks:
||Fold(Fold)|Fold(Family)|Fold(Superfamily)|Reaction|Average|
|---|---|---|---|---|---|
|SchNet|0.2071|0.7670|0.2375|0.5894|0.4502|
|GearNet|0.3090|0.9340|0.4465|0.7814|0.6177|
|EGNN|0.2577|0.9193|0.3568|0.6578|0.5479|
|GCPNet|**0.3276**|0.9359|0.4110|0.6697|0.5860|
|TFN|0.2512|0.9188|0.3426|0.6922|0.5512|
|FAFormer|0.2451|**0.9661**|**0.4883**|**0.7970**|**0.6241**|
Due to the limited time and computational resources, we can only show the results of these two tasks. More benchmarking results will be completed and added to the updated manuscript.
[1] ICLR2024-Evaluating representation learning on the protein structure universe
---
Rebuttal Comment 1.1:
Comment: Thank you again for your hard work in reviewing our paper!
We hope you've had a chance to review our responses to your comments. Please let us know if you have any further questions or concerns. We greatly appreciate your feedback and are committed to clarifying the model innovation and the evaluation of the additional benchmarks. | Summary: The authors propose a novel equivariant model, FAFormer, which leverages the frame averaging operation as an integral geometric component within the Transformer. The authors prove the invariance and equivariance of the architecture. They further conduct experiments showing that FAFormer performs well in contact map prediction and could serve as a strong binding indicator for aptamer screening.
Strengths: - FAFormer is novel in incorporating frame averaging within each layer of the Transformer. The FAlinear module provides a novel method for creating invariant models.
- Empirical results demonstrate the superior performance of FAFormer on contact map prediction and aptamer screening tasks. FAFormer excels in screening aptamers and outperforms AlphaFold3.
- The authors provide a detailed proof of the invariance and equivariance property.
Weaknesses: - The F1 score for contact map prediction tasks is about 0.1, which is very low. I question whether this score can be used to judge which model is better. The model potentially performs worse due to separate modeling of nucleic acid and protein, the linear prediction head, and the difficulty of the pairwise prediction task. I suggest involving other node-level and graph-level prediction tasks for comparing FAFormer with other equivariant neural networks.
- The main idea of the model is confusing. If we already know the coordinates of the protein and nucleic acid, why do we need further encoding and prediction for contact prediction, instead of directly counting the contacts?
Technical Quality: 3
Clarity: 2
Questions for Authors: - The module structures with formulas are a bit difficult to follow. Also, could you provide more intuition on the design of these modules?
- Since we are constructing a KNN graph and always leveraging the coordinate X and edge E information, why should the model be called a “former” instead of a GNN, which could be misleading?
- As far as I know, RosettaFoldNA is used mainly for protein and NA structure prediction. However, FAFormer uses coordinate information as input. Is that a fair comparison?
- I am curious about the efficiency of FAFormer compared with other equivariant NNs. Could you provide some analyses?
- During experiments, do other methods include the same featurization? And are you training all models from scratch?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The work provides an innovative framework, FAFormer, and performs well on contact map prediction and aptamer screening tasks. However, the experimental setting is not convincing enough and requires refinement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ```
Weakness: The F1 score for contact map prediction tasks is about 0.1, which is very low. I question whether this score can be used to judge which model is better. I suggest involving other node-level and graph-level prediction tasks for comparing FAFormer with other equivariant neural networks.
```
**Ans**: We would like to clarify that, regardless of the contact map prediction performance, the **main focus** of this paper is unsupervised aptamer screening. Contact map prediction is proposed as a prerequisite task due to its strong correlation with the screening task. While contact map prediction is challenging due to sparse labels, all comparisons are fair, with the same input features and learning pipelines. For benchmarking:
- We have provided results on **binding site prediction** in *Appendix E.1*. This task predicts the NA-binding residues for a given protein, which is a **node-level task**. The metrics exceed 0.4 since this task is much easier than the contact map prediction.
- The **aptamer screening task** is a **graph-level task** aimed at retrieving the positive RNAs for a protein.
Moreover, we present FAFormer’s performance on two protein-related tasks:
- Fold prediction: Predict the fold class (1195 classes) for protein, with three test sets (Fold, Family, and Superfamily). Accuracy is the evaluation metric.
- Reaction prediction: Predict the reaction class catalyzed by a given protein (384 classes). Accuracy is the evaluation metric.
Following the experimental settings of ProteinWorkshop[1], including dataset splits and Cα-only input features, the comparison results are shown below. FAFormer demonstrates the best performance across most tasks:
||Fold(Fold)|Fold(Family)|Fold(Superfamily)|Reaction|Average|
|---|---|---|---|---|---|
|SchNet|0.2071|0.7670|0.2375|0.5894|0.4502|
|GearNet|0.3090|0.9340|0.4465|0.7814|0.6177|
|EGNN|0.2577|0.9193|0.3568|0.6578|0.5479|
|GCPNet|**0.3276**|0.9359|0.4110|0.6697|0.5860|
|TFN|0.2512|0.9188|0.3426|0.6922|0.5512|
|FAFormer|0.2451|**0.9661**|**0.4883**|**0.7970**|**0.6241**|
Due to limited time and computational resources, we only show these two tasks' results. More results will be completed and added to the updated manuscript.
[1] ICLR2024-Evaluating representation learning on the protein structure universe
```
Weakness: If we already know the coordinates of the protein and nucleic acid, why do we need further encoding and prediction for contact prediction, instead of directly counting the contacts?
```
**Ans**: We use the individually **predicted structures** of proteins/nucleic acids during the evaluation (*Lines 207-209*), and all the input structures are **decentralized** (*Line 597*), meaning each structure's coordinates are shifted to be centered at the origin, to avoid label leakage. Consequently, the distances between two input structures cannot be used to directly indicate the complex's contacts.
```
Question: Could you provide more intuition on the design of these modules?
```
**Ans:** FA is a general framework that endows a given encoder with equivariance, allowing for the flexible design of equivariant modules. Using FA as a module rather than a model wrapper also avoids an 8x increase in computation. We are glad to elaborate more on each module’s intuition:
- Local Frame Edge Module: In biomolecules, atoms interact through chemical bonds and electronic forces. Embedding these pairwise relationships allows the model to **represent these intricate dependencies** explicitly. We **differentiate the spatial context** around each node by considering the local point cloud centered on each target node. FA embeds the directed vectors between source and target nodes, capturing the direction of interactions.
- Biased MLP Attention Module: The attention mechanism is extended to include edge representations, biasing the attention map to **prioritize/deprioritize certain weights** based on interactions. Node coordinates are also updated to **simulate conformational changes** during molecular interactions, which is crucial for optimal binding.
- Global Frame FFN: We extend the FFN by incorporating FA to update representations in a geometry-aware context. The attention between two atom representations, combined with coordinate information, **functions similarly to distance calculations**.
We will add these discussions to the revised manuscript.
```
Question: Why should the model be called a “former” instead of a GNN, which could be misleading?
```
**Ans**: The core module of FAFormer is the biased attention module, which is the primary reason for considering our model a Transformer. We limit the attention to k-nearest neighbors to reduce the computation, which is commonly used in previous geometric Transformers.
In the paper (*Lines 90-94*), we have summarized that FAFormer is a hybrid of GNN and Transformer, as the edge module functions similarly to GNN aggregation.
```
Question: As far as I know, RosettaFoldNA is used mainly for protein and NA structure prediction. However, FAFormer uses coordinate information as input. Is that a fair comparison?
```
**Ans**: Besides the input sequence, RosettaFoldNA utilizes **searched MSAs** and **protein structure templates** as input. The protein structure templates are the coordinates of the MSAs. In contrast, the input structures for FAFormer are predicted solely based on input sequences.
```
Question: I am curious about the efficiency of FAFormer compared with other equivariant NNs.
```
**Ans**: We have provided an analysis in *Appendix B*, which includes computational complexity and wall-clock time comparisons. In summary, our proposed FAFormer demonstrates greater efficiency than spherical harmonics and FA-based Transformers.
```
Question: Do other methods include the same featurization? And are you training all models from scratch?
```
**Ans**: Yes, all the baseline models use the same features and are all trained from scratch. The hyperparameter and training details are provided in *Appendix A*.
---
Rebuttal Comment 1.1:
Comment: Thank you again for your hard work in reviewing our paper!
We hope you've had a chance to review our responses to your comments. Please let us know if you have any further questions or concerns. We greatly appreciate your feedback and are committed to addressing any potential issues. | null | null | Rebuttal 1:
Rebuttal: We appreciate the reviewers for noting that we propose a novel model (DGm4,npWJ) to address a meaningful problem (DGm4,qqru,npWJ) with a comprehensive evaluation (DGm4,npWJ). We further summarize our key contributions as follows:
1. We explore a new angle to conduct aptamer screening in an unsupervised manner by leveraging the strong correlation with the contact map prediction task.
2. We propose a new equivariant Transformer architecture, FAFormer, by integrating Frame Averaging (FA) within each module. FA as an integral component offers the flexibility to design expressive and equivariant modules, highlighting a new possibility for geometric encoder design in this domain.
3. We construct three protein complex datasets (Protein-RNA/DNA/Protein) and five aptamer datasets to evaluate the models. Our proposed architecture achieves SOTA performance over baselines, including RoseTTAFoldNA and AlphaFold3.
Additionally, two protein understanding tasks are included to further demonstrate the effectiveness of FAFormer (DGm4,qqru). We thank all the reviewers for their valuable comments, and the corresponding refinements will be added to our updated manuscript. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of $\Theta(T^{2/3})$ and its Application to Best-of-Both-Worlds | Accept (poster) | Summary: This paper provides a new adaptive learning rate framework for hard problems, called the Stability-Penalty-Bias matching learning rate (SPB-matching). Using this SPB-matching learning rate, the paper proposes a best-of-both-worlds (BOBW) algorithmic framework for hard online learning problems. It not only achieves simultaneous optimality in the stochastic and adversarial regimes but also achieves the MS-type bound in the adversarial regime with a self-bounding constraint. The paper further demonstrates the utility of the frameworks by studying two hard problems: partial monitoring and graph bandits.
Strengths: - **Clear structure**: The paper is well-organized, starting with the establishment of the framework for the learning rate, followed by the introduction of an algorithmic framework based on this learning rate, and finally exploring two applications of their frameworks.
- **Relatively comprehensive study**: The paper considers both the stochastic and adversarial regimes, as well as an intermediate regime termed the adversarial regime with a self-bounding constraint. It provides algorithms and theoretical guarantees, along with specific parameter choices in Algorithm 1 for two particular problems.
Weaknesses: - **Need more background introduction**: The authors should consider including an optimization problem aligned with (1) to better illustrate the hard problems. Additionally, more explanation is needed for the terms $z_t, h_t, u_t$ in the introduction or preliminaries sections to enhance understanding.
- **Strong assumptions**: Although the assumptions in Theorem 7 are checked for two specific problems, they are quite strong and may not hold for many hard problems. The authors should add more discussion about these assumptions, explaining the meaning of each assumption and identifying scenarios in which these assumptions hold.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Can the authors provide more explanations on the terms $z_t$, $h_t$, and $u_t$?
- On page 5, line 146, the paper assumes access to $\hat{h}_t$, which upper bounds $h_t$. Why is this always the case in reality?
- Why does Theorem 7 not depend on $p_0$?
- Why are $z_t$ and $u_t$ chosen as they are in the two examples? What is the practical meaning behind this choice? How should $z_t$ and $u_t$ be decided for other examples?
- Typo: In Lemma 5, line 163, it should be $J \in \mathbb{N}$ instead of $j \in \mathbb{N}$.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your valuable time and detailed review.
Below are our responses to the review.
> The authors should consider including an optimization problem aligned with (1) to better illustrate the hard problems.
Thank you for the comment regarding optimization problems.
However, we may not have fully understood the reviewer's intent, so if you have time, we would appreciate it if you could be more specific about which optimization problem should be included.
> Additionally, more explanation is needed for the terms $z_t, h_t, u_t$ in the introduction or preliminaries sections to enhance understanding.
Thank you for your suggestion.
We will add an intuitive explanation of $z_t$ and $h_t$ in the introduction, along with an example in the expert setting.
As for $u_t$, it is a parameter that depends on the problem setting, and we provided examples in the sections on partial monitoring and graph bandits in the main text.
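To make the roles of these quantities concrete, the generic stability-penalty shape of FTRL regret bounds in this literature can be written schematically as follows (a sketch only, not the paper's exact bound; the constants and the precise form of the bias term governed by $u_t$ are problem-dependent):

```latex
\mathrm{Reg}_T \;\lesssim\;
\underbrace{\sum_{t=1}^{T} \eta_t\, z_t}_{\text{stability}}
\;+\;
\underbrace{\frac{h_1}{\eta_1} + \sum_{t=1}^{T-1}\Bigl(\frac{1}{\eta_{t+1}} - \frac{1}{\eta_t}\Bigr) h_{t+1}}_{\text{penalty}}
\;+\;
\underbrace{B_T(u_1, \dots, u_T)}_{\text{bias}}
```

where $\eta_t$ is the FTRL learning rate and $B_T$ collects the bias incurred by forced exploration; an adaptive (SPB-matching) learning rate is then chosen to balance these three contributions round by round.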
> Although the assumptions in Theorem 7 are checked for two specific problems, they are quite strong and may not hold for many hard problems. The authors should add more discussion about these assumptions, explaining the meaning of each assumption and identifying scenarios in which these assumptions hold.
Thank you for your comment. From a high-level perspective, these assumptions hold even for partial monitoring, which includes many sequential decision-making problems as special cases, as confirmed in Section 5. In this sense, we do not consider them to be strong assumptions.
Below, we provide a detailed explanation of each assumption:
Assumptions (i) and (ii) naturally arise in the analysis of FTRL. In particular,
Assumption (i): This is very standard because this is satisfied if we use an unbiased estimator $\hat{\ell}$. For example, it can be found in Chapter 11 of Lattimore and Szepesvari's book [36].
Assumption (ii): This is an upper bound on the stability term of FTRL, which can be controlled if the magnitude of the loss estimator is bounded favorably, and it is a standard assumption in the analysis of bandit algorithms. For example, it can be found in Chapter 29 of Lattimore and Szepesvari's book [36].
Assumption (iii): This can be satisfied by ensuring the stability of the output of FTRL and has been investigated in various settings, for example, [55,26,28] and (Bubeck et al. 2018).
For these reasons, Assumptions (i), (ii), and (iii) are considered relatively mild.
The second condition in Eq. (14) has been used in existing research on best-of-both-worlds [26,28]. The first condition in Eq. (14) is new but can be seen as a natural extension of the second condition in Eq. (14) to problems with the minimax regret of $\Theta(T^{2/3})$.
- Bubeck et al. Sparsity, variance and curvature in multiarmed bandits. ALT 2018.
> On page 5, line 146, the paper assumes access to $\hat{h}_t$, which upper bounds $h_t$. Why is this always the case in reality?
Thank you for your valuable comment to improve the paper.
From Assumption (iii) we have $h_t \leq c h_{t-1}$ for some constant $c$.
Hence if we set $\hat{h}\_t \leftarrow c h_{t-1}$, we have $h_t \leq \hat{h}\_t$.
Note that $h\_{t-1}$ can be calculated from the information available at the end of round $t-1$, so it can be used when determining $\beta_t$.
In the revised version, we will include this discussion.
> Why does Theorem 7 not depend on $p_0$?
The parameter $p_0$ should be determined appropriately according to the problem setting.
We design $p_0$ in Algorithm 1 to satisfy Assumption (ii). For example, in partial monitoring, we set $p_0 = 1/k$, and in graph bandits, we set $p_0 = u$ (see Line 265).
To improve the current text, we will revise the statement in Theorem 7 to, "If Algorithm 1 satisfies Assumptions (i)--(iii), then its regret is bounded as ...".
> Why are $z_t$ and $u_t$ chosen as they are in the two examples? What is the practical meaning behind this choice? How should $z_t$ and $u_t$ be decided for other examples?
The values of $z_t$ and $u_t$ are determined by the problem setting and the regularizer.
In general, $u_t$ is chosen to satisfy the condition for stability (the assumption of Lemma 14 in our paper). Controlling the magnitude of the loss estimator by adding appropriate exploration to manage the stability term is common in the literature of FTRL in bandit problems (e.g., [8,26,54] and [36, Chapter 27]).
Regarding $z_t$, it is difficult to provide a general determination strategy within the limited space, as it varies depending on the form of the adaptive bounds that the algorithm designer aims to achieve. However, in the context of best-of-both-worlds bounds, it is common for $z_t$ to be determined based on the output of FTRL, $q_t$, to satisfy conditions similar to Eq. (14) in Theorem 2 [61,62,55,26,28].
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my comments.
The explanation provided by the authors addresses my concerns about the strong assumptions, so I will raise my score to 6. I hope the authors will include an intuitive explanation of $z_t$ and $h_t$ in the revised version. | Summary: The paper aims to develop a new adaptive learning rate framework for the Follow-the-Regularized-Leader (FTRL) algorithm that addresses online learning problems with a minimax regret of $\Theta(T^{2/3})$.
It specifically targets problems with indirect feedback, such as partial monitoring and graph bandits, and demonstrates the efficacy of the proposed framework in improving Best-of-Both-Worlds (BOBW) regret upper bounds.
Strengths: 1. **Innovative Approach:** The SPB-matching learning rate is a novel contribution that simplifies the design of adaptive learning rates for complex online learning problems. It has the potential to be applied to other settings.
2. **Unified Framework:** It successfully unifies the BOBW guarantee for hard problems, representing a good theoretical advancement. Its application to partial monitoring and graph bandits provides tangible improvements over existing methods.
Weaknesses: 1. **Practical Consideration:** The effectiveness of the SPB-matching framework depends on the proper tuning of parameters, which might limit its practical applicability without further optimization techniques. Furthermore, there are no experiments; as a result, it is unclear how the proposed method works in practice, even in simulation.
2. **Clarity and Accessibility:** The paper is difficult to follow due to its technical nature. Some equations and results are presented without sufficient explanation, and the extension to the two case studies introduces many new concepts that are hard to follow.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. **Extension to High-dimensional Sparse Bandits:** The $\Theta(T^{2/3})$ regret reminds me of the results in high-dimensional sparse linear bandits [1*]. The regret in [1*] is $\Theta(s^{1/3}T^{2/3})$ (where $s$ is the level of sparsity), and the algorithm in [1*] also depends on a sort of "forced exploration." Is it possible to apply the framework developed in this paper to this high-dimensional and sparse setting?
- [1*] Hao, Botao, Tor Lattimore, and Mengdi Wang. "High-dimensional sparse linear bandits." Advances in Neural Information Processing Systems 33 (2020): 10753-10763.
2. **Other Upper Bounds:** The introduction of the minimax regret $\Theta(T^{2/3})$ is somewhat abrupt. It makes me wonder whether in some cases, there is a minimax regret like $\Theta(T^{a/(a+1)})$ for some parameter $a>0$ and why the case $a=2$ is so special that we should focus on it.
3. **General Lower Bound:** Is there any general lower bound for the regret in Theorem 2? While particular cases may have existing lower bounds, a general tight lower bound is expected given that the paper starts from a general framework and provides a general upper bound.
4. **Clarification on Exploration Rate Requirement:** Could the author explain why $\gamma_t \ge u_t/\beta_t$ (in line 138) is needed? In which lemma or proof is this condition necessary?
5. **Comparison of SPB-Matching Rules:** Which rule is better in theory or practice: Rule 1 or Rule 2 in (6)?
6. **High-Level Explanation of Regret Improvement:** Could the author briefly explain why the regret is improved after applying the established framework in Sections 5 and 6? Is there any high-level explanation beyond the technical details?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See the Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to carefully review our paper and for providing many questions. Below are our responses to your review.
> The effectiveness of the SPB-matching framework depends on the proper tuning of parameters, which might limit its practical applicability without further optimization techniques.
Thank you for your comment about practical applicability.
However, we may not have fully understood the reviewer's intent, so if you have time, we would appreciate it if you could specify what is meant by further optimization techniques.
If this refers to the optimization of the fractional domination number in graph bandits described in Eq. (19), note that this optimization problem can be solved efficiently as a linear program once before the game begins [13].
Additionally, note that the tuning of parameters can be determined based on past observations, without the need to solve any optimization problem other than FTRL. This is standard in many best-of-both-worlds algorithms (e.g., [26,54,62]).
> there are no experiments, as a result of which, it is unclear how the proposed method works in practice, even in the simulation.
Thank you for your comment. This paper focuses on theoretical aspects, and investigating the numerical performance in simulations or practical scenarios is an important future work.
> ... there is a minimax regret like $\Theta(T^{a/(a+1)})$ for some parameter $a > 0$ and why the case $a=2$ is so special ...
Thank you for pointing out this important point.
The case where $a=2$ is special due to the classification theorem in partial monitoring. Partial monitoring is a very general problem that includes a wide range of sequential decision-making problems as special cases. It is known that, depending on the relationship between the loss matrix $\mathcal{L}$ and the feedback matrix $\Phi$, the minimax regret can be classified into one of four categories: $0$, $\Theta(\sqrt{T})$, $\Theta(T^{2/3})$, or $\Omega(T)$ (see line 484). Among these, the classes with non-trivial difficulties and particular importance are the problems with a minimax regret of $\Theta(\sqrt{T})$ and $\Theta(T^{2/3})$.
This paper focuses on the problems with a minimax regret of $\Theta(T^{2/3})$ due to this classification theorem.
In the revised version, we will explain why we particularly focus on the $\Theta(T^{2/3})$ case.
> Is there any general lower bound for the regret in Theorem 2? While particular cases may have existing lower bounds, a general tight lower bound is expected given that the paper starts from a general framework and provides a general upper bound.
Under the general online learning setup given in Section 2, it is almost impossible to prove a lower bound for Theorem 2, because this setup is extremely general.
For instance, it includes partial monitoring as a special case. However, partial monitoring itself is very complex: even when focusing solely on the adversarial regret, no lower bound that depends on variables other than $T$ is known (see Item 14 of Section 37.9 in Lattimore and Szepesvari's book [36]).
Since constructing a lower bound within the general online learning framework is far more challenging than considering a lower bound for partial monitoring, proving a lower bound for Theorem 2 is currently out of reach.
Still, the results for graph bandits (Corollary 9) obtained as a special case of Theorem 2 match the lower bound. Additionally, as mentioned in Global Comments, in the setting of multi-armed bandits with paid observations, the upper bound obtained by SPB-matching in the adversarial regime also matches the lower bound. From these observations, we can see that Theorem 2 matches the lower bound in several problem settings.
> Could the author explain why $\gamma_t \geq u_t / \beta_t$ (in line 138) is needed? In which lemma or proof is this condition necessary?
This condition is used to bound the magnitude of the loss estimator $\hat{\ell}_t$.
For example, in partial monitoring it is used in Eq. (61), and in graph bandits it is used in Eq. (79). Thanks to this lower bound on $\gamma_t$, we can apply Lemma 14 to control the stability term of FTRL (the LHS of Eq. (56)).
In the revised version, we will provide a more detailed explanation in line 138 about why this condition is necessary.
Such efforts to bound the magnitude of the loss estimator are not unique to this paper but frequently appear in FTRL-related analyses. For instance, in the Exp3 algorithm [8], which is equivalent to FTRL with Shannon entropy, they employ exploration to bound the magnitude of the *gain* vector.
> Could the author briefly explain why the regret is improved after applying the established framework in Sections 5 and 6? Is there any high-level explanation ...?
The high-level reason for the improvement is that the introduction of SPB-matching allows us to use the Tsallis entropy regularizer with an appropriate exponent $\alpha$. Such improvements have been known in various contexts, such as multi-armed bandits [3,5], sparse bandits (Kwon and Perchet, 2016), and (strongly observable) graph bandits (Zimmert and Lattimore, 2019 and [26]).
To achieve best-of-both-worlds using Tsallis entropy, we need to derive a regret upper bound that depends simultaneously on stability and penalty components [26,28,55].
Our paper makes this possible even for problems with a minimax regret of $\Theta(T^{2/3})$.
- Kwon and Perchet. Gains and losses are fundamentally different in regret minimization: The sparse case, JMLR 2016.
- Zimmert and Lattimore. Connections between mirror descent, Thompson sampling and the information ratio, NeurIPS 2019.
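For reference, the Tsallis entropy regularizer discussed above can be written as follows (a standard definition given for illustration only; the paper's exact normalization and sign conventions may differ):

```latex
% Negative Tsallis entropy over the probability simplex, with exponent \alpha \in (0,1):
\psi_\alpha(q) = \frac{1 - \sum_{i} q_i^{\alpha}}{1-\alpha},
% which recovers the negative Shannon entropy in the limit:
\qquad \lim_{\alpha \to 1} \psi_\alpha(q) = \sum_{i} q_i \log q_i .
```

Tuning the exponent $\alpha$ is what allows the interpolation between Shannon-entropy-like and Tsallis-entropy-like behavior mentioned in the cited works.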
Due to space constraints, we will respond to the items that are considered to have a relatively small impact on the evaluation in the following Comments.
---
Rebuttal 2:
Title: Additional Replies
Comment: Here, we will respond to the items that are considered to have a relatively small impact on the evaluation, which cannot be included in the above rebuttal due to the space constraint.
> Is it possible to apply the SPB-matching framework developed in this paper to high-dimensional sparse linear bandits (by Hao-Lattimore-Wang 2020)?
Thank you for your suggestion.
The paper by Hao, Lattimore, and Wang (2020) on high-dimensional sparse linear bandits considers the stochastic regime.
To the best of our knowledge, there are no studies investigating whether a dimension-free bound is possible in the adversarial regime similar to Hao, Lattimore, and Wang (2020).
Therefore, before investigating the applicability of our SPB-matching framework in this setting, it is necessary to study high-dimensional sparse linear bandits in the adversarial regime.
> Which rule is better in theory or practice: Rule 1 or Rule 2 in (6)?
First, the information available for determining $\beta_t$ differs between Rule 1 and Rule 2. In Rule 1, it is assumed that the information up to time $t-1$, $z_t$, $u_t$, and $\hat{h}_t \geq h_t$ are known when determining $\beta_t$. In Rule 2, it is assumed that the information up to time $t-1$ and $\hat{h}_t \geq h_t$ are known when determining $\beta_t$. Rule 1 is included due to theoretical interest and is not used in Sections 4 and 5. In the revised version, we will explicitly state this and further emphasize that the information available when determining $\beta_t$ differs between Rule 1 and Rule 2. | Summary: In this work, the authors propose a simple and adaptive learning rate for FTRL, which can achieve the minimax regret of $O(T^{2/3})$ for some hard online learning problems. Specifically, they have applied their algorithm (FTRL with the proposed learning rate) to achieve the best-of-both-worlds regret bounds for partial monitoring (with global observability) and graph bandits (with weak observability), respectively. For the first problem, compared with the best existing results, this paper achieves a $\log(T)/\log(k)$ improvement in the stochastic setting, and a $(\log(T)+\log^2(T)/\log(k))^{1/3}$ improvement in the adversarial setting. However, for the second problem, compared with the best existing results, it seems that this paper can only achieve worse regret bounds (by a factor of $\delta^\ast/\delta$) in the stochastic and adversarial settings.
Strengths: Compared with previous studies, this paper has the following strengths.
1) A simple and adaptive learning rate is proposed for FTRL with forced exploration, which can achieve the minimax regret for some hard online learning problems, such as partial monitoring (with global observability) and graph bandits (with weak observability).
2) The authors have improved the best-of-both-worlds regret bounds for partial monitoring by utilizing the proposed learning rate. Specifically, compared with the best existing results, this paper achieves a $\log(T)/\log(k)$ improvement in the stochastic setting, and a $(\log(T)+\log^2(T)/\log(k))^{1/3}$ improvement in the adversarial setting.
Weaknesses: However, I have some concerns about this paper.
1) The proposed learning rate is highly inspired by an existing adaptive learning rate in a recent work [26]. Although the existing one can only be utilized to achieve the minimax regret bound of $O(\sqrt{T})$ for some relatively easy problems, the only difference for the proposed learning rate is to further trade off a bias term (with the stability term), which does not bring enough challenges.
2) The motivation for such a learning rate for FTRL is to deal with some hard online learning problems (e.g., partial monitoring and graph bandits) that have been well-studied before. Although this paper has applied their algorithm (FTRL with the proposed learning rate) to achieve the best-of-both-worlds regret bounds for partial monitoring and graph bandits respectively, it does not bring significant advantages compared with existing results, i.e., only logarithmic improvements for partial monitoring and even worse results for graph bandits.
3) In Section 3, two update rules are proposed for the simple and adaptive learning rate. However, in Sections 4 and 5, it seems that only the second update rule is utilized. So, it is confusing why the authors propose the first update rule and spend non-ignorable spaces in the main text to introduce the corresponding results.
4) The writing of this paper needs to be improved. For example, the majority of the literature review is provided only in the appendix, which reduces the readability of this paper. In addition, in line 273, the authors emphasize that "Our bound is the first BOBW FTRL-based algorithm with the $O(\log T)$ bound ...". Although this is not wrong, it is not particularly noteworthy, because there already exist other non-FTRL-based algorithms achieving the $O(\log T)$ regret bound.
Technical Quality: 2
Clarity: 2
Questions for Authors: Besides the concerns discussed above, I also have the following two questions.
1) Can the authors provide a detailed comparison between the proposed algorithm and existing algorithms in works [26, 54]?
2) It seems that Dann et al. [15] propose a black-box approach to best-of-both-worlds regret bounds in bandits. Can their approach be utilized to solve the partial monitoring problem?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have provided some discussions on the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your thorough and thoughtful review of our work. Here are our responses to your feedback.
> The proposed learning rate is highly inspired by an existing adaptive learning rate in a recent work [26]. ... which does not bring enough challenges.
Yes, as the reviewer pointed out, the proposed adaptive learning rate is highly inspired by [26].
The primary challenge in our paper lies in finding a beneficial form of regret upper bounds that simultaneously depend on both stability and penalty for problems with a minimax regret of $\Theta(T^{2/3})$.
We found that the form of $O\Big( \big(\sum_{t=1}^T \sqrt{z_t h_{t+1} \log T} \big)^{1/3} \Big)$ is beneficial. To our knowledge, this form of regret upper bound is new.
In the revised version, we will include a discussion on these points.
Due to the space constraint, please refer to *primary challenge in our paper* in Global Comments for further details.
> ... only logarithmic improvements for partial monitoring and even worse results for graph bandits.
and
> ... the authors emphasize that "Our bound is the first BOBW FTRL-based algorithm with the $O(\log T)$ bound ...". it is not much to be particular about, ...
**Logarithmic improvement**
First, logarithmic improvement is important. Historically, significant efforts have been dedicated to achieving an $O(\sqrt{kT})$ regret upper bound without the logarithmic term in multi-armed bandits [3,5]. Attempts to improve this logarithmic term have subsequently led to the development of the notable best-of-both-worlds algorithm in multi-armed bandits [61]. Furthermore, beyond multi-armed bandits, there are numerous efforts aimed at improving the logarithmic factor to enhance performance, for example, (Kwon and Perchet 2016; Zimmert and Lattimore 2019; Eldowa et al. 2023). Based on these points, we can see that the improvement of the logarithmic factor is important.
References
- Kwon and Perchet. Gains and losses are fundamentally different in regret minimization: The sparse case, JMLR 2016.
- Zimmert and Lattimore. Connections between mirror descent, Thompson sampling and the information ratio, NeurIPS 2019.
- Eldowa et al. On the Minimax Regret for Online Learning with Feedback Graphs, NeurIPS 2023.
**Comparison with graph bandits result in Dann et al. [15]**
As pointed out, the regret upper bound for our graph bandits indeed deteriorates compared to the bound of Dann et al. [15]. This difference arises because the (fractional) weak domination number in Dann et al. becomes the fractional domination number in our bound; note that this change worsens the bound only when the feedback graph has a self-loop.
However, the approach of Dann et al. [15] employs a highly complex, multi-stage reduction approach. Moreover, it has the disadvantage of discarding past observations, similar to the doubling-trick.
We have demonstrated that the framework of Follow-The-Regularized-Leader alone can achieve an upper bound similar to that accomplished by this highly complex approach, which is a significant theoretical advancement.
**Generality to other problem settings**
Using our SPB-matching, we believe that best-of-both-worlds algorithms can be developed for a wide range of problems with a minimax regret of $\Theta(T^{2/3})$.
For instance, the authors found after the submission that it is possible to achieve best-of-both-worlds bounds in multi-armed bandits with paid observations [53] by SPB-matching.
Due to the space constraint, please refer to *Generality to Other Problem Settings* in Global Comments for further details.
> ... two update rules are proposed for the simple and adaptive learning rate. However, in Sections 4 and 5, it seems that only the second update rule is utilized. So, it is confusing ...
Thank you for your suggestion.
Rule 1 was included for theoretical interest, and as the reviewer pointed out, it is not used in Sections 4 and 5.
In the revised version, we will emphasize that only Rule 2 is used in Sections 4 and 5.
> ... the majority of the literature review is only proposed in the appendix, ...
Due to page constraints, much of the literature review had to be included in the appendix. In the revised version, we will aim to include as much of the literature review as possible in the main text.
Below, we provide replies to the questions.
> It seems that Dann et al. [15] propose a black-box approach to best-of-both-worlds regret bounds in bandits. Can their approach be utilized to solve the partial monitoring problem?
Thank you for your question. By employing their black-box reduction approach, it seems possible to achieve an upper bound of the same order as ours in globally observable partial monitoring.
We apologize for not realizing this at the time of submission; this is because Dann et al.'s method cannot be used in partial monitoring games with a minimax regret of $\sqrt{T}$ unless the loss of the selected action is observed [56].
Nevertheless, as previously mentioned, the approach by Dann et al. [15] is a complicated approach involving multi-stage reductions and has the drawback of discarding past observations, similar to the doubling-trick.
Hence, demonstrating that using the FTRL framework alone can achieve the same upper bound is a significant theoretical advancement.
In the revised version, we will include the regret upper bound obtained by Dann et al. in Table 1 and describe the differences between their method and the advantages of using FTRL directly.
Due to space constraints, we will respond to the items that are considered to have a relatively small impact on the evaluation in the following Comments.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I currently do not have additional questions, and will make my final decision after further discussing with other reviewers and AC.
---
Rebuttal 2:
Title: Additional Replies
Comment: Here, we will respond to the items that are considered to have a relatively small impact on the evaluation, which cannot be included in the above rebuttal due to space constraints.
> What are the lower bounds for partial monitoring with global observability?
To the best of our knowledge, beyond the dependency on $T$, little is known about the lower bound for partial monitoring (with global observability).
In the adversarial regime, the dependency on $T$ is known to be $\Omega(T^{2/3})$ [9, 34, 35].
However, little is understood about the dependencies on the variables $k$, $d$, and $m$, which is also mentioned in Section 37.9 of [36].
In the stochastic regime, an asymptotic distribution-dependent lower bound of $\Omega(D(\nu^*) \log T)$ is known [30], where $D(\nu^*)$ denotes the optimal value of a certain complex optimization problem.
However, it is not clear how this $D(\nu^*)$ depends on $k$, $d$, and $m$, and investigating this is important future work.
In the revised version, we will discuss that little is known about the dependencies other than $T$ for the lower bound in partial monitoring.
> Can the authors provide a detailed comparison between the proposed algorithm and existing algorithms in works [26, 54]?
(Comparison with [26])
As mentioned above, [26] considers problems with a minimax regret of $\Theta(\sqrt{T})$, whereas this study considers problems with a minimax regret of $\Theta(T^{2/3})$, and the ideal form of the bound in terms of the stability component $z_t$ and the penalty component $h_t$ differs between the two.
(Comparison with [54])
In [54]:
- Their main regularizer is the negative Shannon entropy.
- The regret upper bound in the stochastic regime is $O((\log T)^2)$.
- Their learning rate (for globally observable games) is highly complicated.
Our approach:
- Our main regularizer is the negative Tsallis entropy.
- The regret upper bound in the stochastic regime is $O(\log T)$.
- The learning rate is designed based on a simple principle that matches stability, penalty, and bias components. | Summary: This paper considers the online problems with a minimax regret of $\Theta(T^{2/3})$ and proposes a new learning rate tuning based on the FTRL algorithm to tackle these problems. The development of the learning rate is straightforward and simpler compared to previous works. The proposed algorithm is versatile in various applications, and the authors prove that it can achieve improved best-of-both-worlds regret bounds.
Strengths: - This paper is technically solid, yet the discussion of the techniques is easy to follow.
- It proposes a simpler and more straightforward tuning strategy, which leads to better best-of-both-worlds theoretical guarantees in partial monitoring and graph bandit problems.
- The applications justify their findings.
Weaknesses: The main technique of designing learning rates so that the stability and bias terms are matched seems to already appear in the cited paper [26], and the development of the method seems to focus on how to handle the additional term introduced by the 'forced exploration' strategy. A quick glance at Section 5.1 of the cited paper [26] shows that it also employs the 'forced exploration' strategy, except that the balance ratio $\gamma$ is fixed. It would be beneficial to further discuss the difficulties encountered when tackling the 'hard problem'.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the author respond to the concerns in the 'Weaknesses'?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable time and helpful comments.
Below is our response to your comments.
> With a quick glance at the cited paper [26], in Section 5.1, this paper also employs the 'forced exploration' strategy, except that the balance ratio is fixed. It would be beneficial to further discuss the difficulties encountered when tackling the 'hard problem'.
The primary difficulty in our paper lies in finding a beneficial form of regret upper bounds that simultaneously depend on both stability and penalty for problems with a minimax regret of $\Theta(T^{2/3})$.
For problems with a minimax regret of $\Theta(\sqrt{T})$, it is known that the form of $O\Big(\sqrt{\sum_{t=1}^T z_t h_{t+1} \log T}\Big)$ for the stability component $z_t$ and the penalty component $h_t$ allows us to derive best-of-both-worlds bounds [26, 28, 55].
However, for problems with a minimax regret of $\Theta(T^{2/3})$, it is non-trivial to determine the form of regret upper bounds that are useful for achieving best-of-both-worlds bounds.
We found that the form of $O\Big( \big(\sum_{t=1}^T \sqrt{z_t h_{t+1} \log T} \big)^{1/3} \Big)$ is beneficial. To our knowledge, this form of regret upper bound is new.
In the revised version, we will include a discussion on these points.
Regarding the difference in the exploration rate from Section 4.1 of [26] (we believe Section 5.1 is a typo for Section 4.1), they only consider problems with a minimax regret of $\Theta(\sqrt{T})$, and they set the exploration rate $\gamma_t$ either to $\gamma_t = 0$ or $\gamma_t \simeq z_t / \beta_t$.
When $\gamma_t \simeq z_t / \beta_t$, the additional regret due to this exploration is of the same order as the stability term, and thus the tuning strategy for the learning rate $\beta_t$ remains the same.
In contrast, in problems with a minimax regret of $\Theta(T^{2/3})$, due to the nature of handling indirect feedback, forced exploration becomes necessary. As a result, the magnitude of the loss estimator increases, resulting in $\gamma_t$ appearing in the denominator of the stability term (see Lines 134--140). Hence, if we use $\gamma_t \simeq z_t / \beta_t$, the stability term becomes excessively large. Setting a larger exploration rate, $\gamma_t \simeq \sqrt{z_t / \beta_t}$, resolves this issue.
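The choice $\gamma_t \simeq \sqrt{z_t/\beta_t}$ can also be understood as equating the per-round stability and bias contributions. As a simplified sketch (ignoring the penalty term and constants; this is an illustration, not the paper's exact statement):

```latex
% Per-round contributions: stability z_t/(\beta_t \gamma_t) and bias \gamma_t.
% Equating them,
\frac{z_t}{\beta_t \gamma_t} = \gamma_t
\quad\Longleftrightarrow\quad
\gamma_t = \sqrt{\frac{z_t}{\beta_t}},
% so both contributions equal \sqrt{z_t/\beta_t}; the smaller choice
% \gamma_t \simeq z_t/\beta_t would instead leave the stability term of constant
% order per round, which sums to an excessively large total.
```
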
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. I will keep my positive score. | Rebuttal 1:
Rebuttal: Thank you very much for your valuable time and thorough, insightful reviews.
While we have replied directly to each reviewer, there are some important points that we could not fully explain due to space constraints.
Therefore, we are addressing these points as global comments here. Please feel free to let us know if you have any further questions.
**Primary challenge in our paper**
The primary challenge in our paper lies in finding a beneficial form of regret upper bounds that simultaneously depend on both stability and penalty for problems with a minimax regret of $\Theta(T^{2/3})$.
For problems with a minimax regret of $\Theta(\sqrt{T})$, it is known that the form of $O\Big(\sqrt{\sum_{t=1}^T z_t h_{t+1} \log T}\Big)$ for the stability component $z_t$ and the penalty component $h_t$ allows us to derive best-of-both-worlds bounds [26, 28, 55].
However, for problems with a minimax regret of $\Theta(T^{2/3})$, it is non-trivial to determine the form of regret upper bounds that are useful for achieving best-of-both-worlds bounds.
We found that the form of $O\Big( \big(\sum_{t=1}^T \sqrt{z_t h_{t+1} \log T} \big)^{1/3} \Big)$ is beneficial. To our knowledge, this form of regret upper bound is new.
In the revised version, we will include a discussion on these points.
**Generality to other problem settings**
Using our SPB-matching, we believe that best-of-both-worlds algorithms can be developed for a wide range of problems with a minimax regret of $\Theta(T^{2/3})$.
For instance, the authors found after the submission that it is possible to achieve best-of-both-worlds bounds in multi-armed bandits with paid observations [53] by SPB-matching.
In this setting, the learner can observe the loss of any action by paying a cost, and the goal of the learner is to minimize the sum of the regret and total paid costs for observations.
The cost of observations behaves similarly to the bias term in the stability–penalty–bias decomposition, and thus our SPB-matching learning rate can be applied.
In particular, we can show that the sum of the regret and paid costs is roughly bounded by $O\big((c k \log k)^{1/3} T^{2/3} + \sqrt{T \log k}\big)$ in the adversarial regime and by $O\big(\max\{c,1\}\, k \log k \log T / \Delta_{\min}^2\big)$ in the stochastic regime for the cost of observation $c$.
These are the first best-of-both-worlds bounds for multi-armed bandits with paid observations, and the bound for the adversarial regime is of the same order as [53, Theorem 3], demonstrating the effectiveness of SPB-matching.
The proof is almost the same as in the cases of graph bandits and partial monitoring. The revised version will include this discussion. | Summary: The paper introduces a novel adaptive learning rate framework for Follow-the-Regularized-Leader (FTRL) tailored to online learning problems characterized by a minimax regret of $\Theta(T^{2/3})$. This new learning rate, termed Stability-Penalty-Bias (SPB) matching, is designed by balancing stability, penalty, and bias terms within the decomposition of the regret as $\text{Reg}_T \leq \sum_{t=1}^T \frac{z_t}{\beta_t \gamma_t} + \beta_1 h_1 + \sum_{t=2}^T (\beta_t - \beta_{t-1}) h_t + \sum_{t=1}^T \gamma_t$. The paper demonstrates the efficacy of this approach through its application to two significant “hard” online learning problems: partial monitoring and graph bandits, both of which involve indirect feedback. The proposed framework improves upon existing Best-of-Both-Worlds (BOBW) regret bounds, providing simpler yet effective learning rates compared to existing methods and consequently achieving $O(\log T)$ regret in the stochastic regime and $O(T^{2/3})$ in the adversarial regime.
Strengths: This work proposes a novel and interesting unified framework for analyzing regret in online learning problems, inspired by previous work with similar methods of regret decomposition.
This enables new BOBW guarantees for the problems of partial monitoring with global observability and graph-feedback bandits with weak observability, significantly improving over previous work.
The main results are also introduced with sufficient generality that can help the application and adoption of the proposed SPB-matching technique to other “hard” problems as suggested by the authors.
Additionally, the same method is shown to retrieve regret bounds under the more general setting with self-bounding constraints.
The algorithm is simple to understand for readers who have familiarity with the related work. It is a variant of the well-known follow-the-regularized-leader (FTRL) with additional exploration (given the weak observability of the considered problems), negative Tsallis entropy regularizer (with an appropriate tuning of its parameter $\\alpha = 1-1/(\\log k)$), and careful tuning of the learning rate via the proposed SPB-matching.
Even considering the simplicity of the main methodology, the observations and ideas behind it are nontrivial and clever.
They required some care to have all the pieces fit together to achieve the final results.
A secondary but nontrivial observation made in this work is that the generality of the Tsallis entropy provides an improvement to the regret bounds compared to using only Shannon entropy and log-barrier.
A similar phenomenon has already been noted in previous work, but to the best of my knowledge, this is the first time this observation has been made for the problems considered here.
Finally, the presentation slowly introduces the SPB-matching technique and the ideas behind it help understand how the algorithm works for providing the desired guarantees.
This might also help in adopting similar techniques in other settings, possibly providing new and improved BOBW results in additional “hard” online learning problems.
Weaknesses: There is a clear effort by the authors to provide a smooth and clear description of the SPB-matching technique.
However, the large number of parameters made it hard to fully understand how each would exactly influence the analysis of the final regret bound of the algorithm.
Only after multiple reads did the full details click and it was clear how all the pieces nicely fit together.
In any case, this appears to be caused by the generality of this unified framework and is likely to be unavoidable.
Furthermore, the regret bounds in Corollaries 9 and 11 are not fully clear because of the presence of $\\kappa$ and $\\kappa’$.
Given that all the other parameters have specific values for the two problems considered here, and both $\\kappa$ and $\\kappa’$ are a function of them, replacing them and thus having an explicit dependence on the actual parameters of the problems could make it easier to understand the regret guarantees in the two corollaries and to compare them with previous related results.
Even a simple comment on the influence of $\\kappa$ and $\\kappa'$ on the regret bound in these two specific cases would help the reader.
Technical Quality: 4
Clarity: 3
Questions for Authors: - The possibility of adopting similar ideas to other “hard” online learning problems is suggested in the conclusions. However, while the hardness of the problems considered here is mainly related to the observability of the losses, other problems have different characteristics that make them “hard”. For example, the additional hardness of bandits with switching costs is about the loss being adaptive to the previous action of the learner, leading to an extra cost whenever the learner changes the action to play. On the other hand, SPB-matching is related to the presence of the bias term due to having extra exploration to tackle the limited observability. Do the authors believe a variant of SPB-matching could help with these additional problems? Or do they believe these problems would require different novel ideas? Just some vague intuition about this would be appreciated.
- What are the lower bounds for partial monitoring with global observability? I think being clearer about this would help the reader understand the quality of your regret bound for this problem.
Minor comments/typos:
- Line 211: “literature” instead of “litarature”
- Line 221: l.h.s. of the inline equation should be $\\sum\_{c=1}^k \\bigl( G(c,\\Phi\_{cx})\_{a} - G(c,\\Phi\_{cx})\_{b} \\bigr)$
- Line 695: duplicate “which is not desirable.”
- Line 700: “Shannon” instead of “Shanon”
- Lines 701-702: “dominating” instead of “domination”
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors address potential limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable time and thorough review.
Regarding the minor comments and typos, we will address and correct them in the revised version.
Below is our response to the review:
> the regret bounds in Corollaries 9 and 11 are not fully clear because of the presence of $\kappa$ and $\kappa'$. Given that all the other parameters have specific values for the two problems considered here, and both $\kappa$ and $\kappa'$ are a function of them, replacing them and thus having an explicit dependence on the actual parameters of the problems could make it easier to understand the regret guarantees in the two corollaries and to compare them with previous related results. Even a simple comment on the influence of $\kappa$ and $\kappa'$ to the regret bound for in these two specific cases would help the reader.
Thank you for your valuable comments.
For the partial monitoring problem, when we use $\beta_1 = 64 c_{\mathcal{G}}^2 / (1 - \alpha)$, which satisfies the first inequality in Eq. (17), we have $\kappa = O(c_{\mathcal{G}}^2 \log k + k^{3/2} (\log k)^{5/2})$ and
$\kappa' = \kappa + O\left( ( c_{\mathcal{G}}^{2/3} (\log k)^{1/3} + \sqrt{c_{\mathcal{G}} \log k} ) ( \frac{1}{\Delta_{\min}^2} + \frac{C}{\Delta_{\min}} ) \right)$.
For the graph bandit problem, when we use $\beta_1 = 64 \delta^* / (1 - \alpha)$, which satisfies the first inequality in Eq. (20), we have $\kappa = O(\delta^* \log k + k^{3/2} (\log k)^{5/2})$ and
$\kappa' = \kappa + O\left( ( (\delta^* \log k)^{1/3} + \sqrt{\delta^* \log k} ) ( \frac{1}{\Delta_{\min}^2} + \frac{C}{\Delta_{\min}} ) \right)$.
In the revised version, we will make the influence of $\kappa$ and $\kappa'$ explicit.
> Do the authors believe a variant of SPB-matching could help with these additional problems? Or do they believe these problems would require different novel ideas? Just some vague intuition about this would be appreciated.
Thank you for pointing out this important point.
We believe that our approach may not be applicable to all problems with the minimax regret of $\Theta(T^{2/3})$ (hard problems), and as the reviewer suggests, different novel ideas might be required.
Still, if a hard problem has a structure similar to hard graph bandits and partial monitoring problems that requires additional exploration, we believe that the best-of-both-worlds bounds can be achieved by SPB-matching.
For instance, the authors found after the submission that it is possible to achieve best-of-both-worlds bounds in multi-armed bandits with paid observations [53] by SPB-matching.
In this setting, the learner can observe the loss of any action by paying a cost, and the goal of the learner is to minimize the sum of the regret and total paid costs for observations.
The cost of observations behaves similarly to the bias term in the stability–penalty–bias decomposition, and thus our SPB-matching learning rate can be applied.
In particular, we can show that the sum of the regret and paid costs is roughly bounded by $O\big( (c k \log k)^{1/3} T^{2/3} + \sqrt{T \log k} \big)$ in the adversarial regime and by $O\big( \max\{c, 1\} \, k \log k \log T / \Delta_{\min}^2 \big)$ in the stochastic regime, for the cost of observation $c$.
These are the first best-of-both-worlds bounds for multi-armed bandits with paid observations, and the bound for the adversarial regime is of the same order as [53, Theorem 3], demonstrating the effectiveness of the SPB framework.
The proof is almost the same as in the cases of graph bandits and partial monitoring. The revised version will include this discussion.
> What are the lower bounds for partial monitoring with global observability?
To the best of our knowledge, beyond the dependency on $T$, little is known about the lower bound for partial monitoring (with global observability).
In the adversarial regime, the dependency on $T$ is known to be $\Omega(T^{2/3})$ [9, 34, 35].
However, little is understood about the dependencies on the variables $k$, $d$, and $m$, which is also mentioned in Section 37.9 of [36].
In the stochastic regime, an asymptotic distribution-dependent lower bound of $\Omega(D(\nu^*) \log T)$ is known [30], where $D(\nu^*)$ denotes the optimal value of a certain complex optimization problem.
However, it is not clear how this $D(\nu^*)$ depends on $k$, $d$, and $m$, and investigating this is important future work.
In the revised version, we will discuss that little is known about the dependencies other than $T$ for the lower bound in partial monitoring.
---
Rebuttal Comment 1.1:
Comment: Thank you for the exhaustive responses to my questions and the useful insights. I am keeping my positive score. | null | null | null | null | null | null |
Amortized Planning with Large-Scale Transformers: A Case Study on Chess | Accept (poster) | Summary: This paper introduces an open dataset comprising real chess positions annotated with an evaluation score, the best move to play, and a score for each legal move according to Stockfish 16 using 50ms per action. They then train large transformers on this dataset, and include an in-depth analysis through making the model play on Lichess, as well as ablation studies over the hyperparameters of the network.
Strengths: - The paper provides an extensive analysis of the current state and abilities of their trained Transformer on ChessBench, giving a really strong starting point for future work on the subject.
- The promising results of using a Transformer architecture to replicate Stockfish's evaluation point to the ability of large models to perform well in the area of planning for complex tasks.
- The dataset introduced by the authors is quite extensive, and could prove very useful for future research
Weaknesses: - The dataset might be of relatively poor quality, as it uses Stockfish 16 with only 50ms for each evaluation. This could make a significant proportion of the evaluations wrong, and potentially limit the playing strength of models trained on this data. (Table 1 shows a significant (>200 Elo) disparity in playing strength between 0.05s and 1.5s for Stockfish.)
- The goal of the paper is not immediately obvious. A first reading of the abstract and the introduction, together with the title, seems to imply that the paper is concerned with the general ability of Transformers to tackle planning problems, while the contributions are more focused on providing a dataset (as well as an initial case study on it) for future research on this topic, focused on the case of chess.
Technical Quality: 4
Clarity: 4
Questions for Authors: Section 2.2 mentions that the problem was seen as a classification problem, binning the win percentages into K uniform bins. Was augmenting the density of bins around 50% in order to improve accuracy in critical positions considered?
Table A3(a) in the appendix seems to show that Stockfish with 0.05s per move takes 1.5s per move on average. What does this mean? Wasn't the 50ms limit a strict one?
Appendix A9 mentions the hardware used for the experiments, but not how long it took to train each model. Would it be possible to add this information?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful and positive feedback.
**Is the dataset of poor quality if you use Stockfish with only 50ms evaluation time (as there is a significant Elo disparity between 0.05s and 1.5s for Stockfish)? Also, why does Stockfish with 0.05s evaluation time take 1.5s to play a move on average?**
The dataset is of high (super grandmaster) quality, as evidenced by the fact that Stockfish with 0.05s, i.e., the data-generating oracle, achieves an Elo of 2713 against other bots on Lichess. While higher time limits would lead to even stronger annotations, there is a trade-off in terms of computing time spent on collecting the dataset (currently 8680 days of unparallelized compute). Note that we did, however, create smaller-scale datasets with higher annotation time limits in Table A2, showing that none of our networks can currently fully match the performance of the data-generating oracle(s), so the current frontier seems to lie with improving models rather than the playing strength of the data-generating oracle.
To clarify the 0.05s vs. 1.5s evaluation time, we consider two versions of Stockfish:
* *Data-generating oracle*: For a given board, the oracle *evaluates* every legal move separately for 0.05s (i.e., if there are, e.g., 30 legal moves, Stockfish uses a total time of 1.5s to *play* a move). We primarily compare to this version since this is how we constructed the dataset.
* *“Standard-play” Stockfish*: For completeness, we also include Stockfish as it would typically be used in standard play, i.e., its evaluation time is restricted per state but not per move, meaning that some clearly suboptimal moves can receive very little time (based on Stockfish’s pruning) and the resulting extra budget is instead spent on more promising moves. Since there are roughly 30 legal moves per board on average, we choose a time limit of 30 * 0.05s = 1.5s per board state for “standard-play” Stockfish. This leads to an improvement in play of >200 Elo compared to the version that spends a fixed 50ms on each legal move per board state, but we only include this baseline as a “reference” (i.e., it is not reflected in the dataset).
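The difference between the two regimes can be sketched in a few lines of Python. The `evaluate` stub below is purely illustrative (it stands in for an actual Stockfish call, which is not reproduced here), and all names are hypothetical:

```python
def evaluate(board, move, time_limit):
    """Illustrative stub for a Stockfish evaluation of `move` on `board`.

    Returns a deterministic fake centipawn score so the sketch runs
    without an engine binary; `time_limit` is accepted but unused.
    """
    return hash((board, move)) % 200 - 100

def oracle_scores(board, legal_moves, per_move_time=0.05):
    """Data-generating oracle: every legal move gets its own fixed budget.

    With ~30 legal moves, annotating one board costs roughly
    30 * 0.05s = 1.5s in total.
    """
    return {move: evaluate(board, move, per_move_time) for move in legal_moves}

def standard_play_move(board, legal_moves, per_board_time=1.5):
    """'Standard-play' engine: a single budget per board state.

    A real engine would spend this budget unevenly across promising
    moves (pruning the rest); here we just split it evenly and take
    the argmax over the stub scores.
    """
    budget = per_board_time / max(len(legal_moves), 1)
    scores = {move: evaluate(board, move, budget) for move in legal_moves}
    return max(scores, key=scores.get)
```

The sketch only illustrates the budgeting difference, not the pruning behavior of the real engine.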
We agree that the difference between these two versions is subtle, and we will clarify it in the next revision of our paper.
**How long did it take to train each model?**
The 9M models trained at roughly 270 steps per second, yielding a total training time of 10M / (270 * 3600) ≈ 10.3 hours. The 136M models trained at approximately 26 steps per second, yielding a total training time of 10M / (26 * 3600 * 24) ≈ 4.45 days. The 270M models trained at roughly 13 steps per second, yielding a total training time of 10M / (13 * 3600 * 24) ≈ 8.9 days. We will add these details to Appendix A.6, which describes our hardware setup.
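The arithmetic above can be checked with a short helper (hypothetical, not from the training codebase):

```python
def training_time_days(total_steps, steps_per_second):
    """Convert a step budget and measured throughput into wall-clock days."""
    return total_steps / (steps_per_second * 3600 * 24)

TOTAL_STEPS = 10e6  # all models train for 10M steps

for name, sps in [("9M", 270), ("136M", 26), ("270M", 13)]:
    days = training_time_days(TOTAL_STEPS, sps)
    print(f"{name} model: {days * 24:.1f} hours ({days:.2f} days)")
```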
**Did you consider non-uniform binning to improve accuracy in critical positions?**
We initially experimented with non-uniform binning but quickly abandoned the approach due to its increased complexity (e.g., state-value expansion is non-trivial with non-uniform bins due to its minimax nature) and failure to produce large performance gains. However, we did ablate the number of bins in Table A2, showing that an increased resolution improves performance, but only up to a certain point. Based on our experience, non-uniform binning can lead to gains when the overall number of bins is very small, but these gains diminish relatively rapidly with medium to large numbers of bins.
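For concreteness, the uniform binning of win percentages described in Section 2.2 can be sketched as follows (a minimal illustration; the function names are ours, not the paper's):

```python
def win_prob_to_bin(p, num_bins):
    """Map a win probability in [0, 1] to one of `num_bins` equal-width class indices."""
    assert 0.0 <= p <= 1.0
    # Clamp so that p == 1.0 falls into the last bin rather than index num_bins.
    return min(int(p * num_bins), num_bins - 1)

def bin_to_win_prob(bin_index, num_bins):
    """Inverse map: represent a bin by its center."""
    return (bin_index + 0.5) / num_bins

# With K uniform bins the resolution is 1/K everywhere; a non-uniform
# scheme would instead concentrate bins around p = 0.5 (critical positions).
K = 128
b = win_prob_to_bin(0.51, K)
print(b, bin_to_win_prob(b, K))
```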
---
Rebuttal Comment 1.1:
Comment: I acknowledge that I have read the rebuttal. The revisions and clarifications are welcome, and I will maintain my evaluation. | Summary: This paper aims to solve the challenging problem in the chess game. The main contributions include building a large-scale dataset and training a large-scale transformer model with the collected dataset. The proposed approach has shown a significant outperformance over the baselines.
Strengths: The writing is mostly clear.
The empirical performance shows a great improvement.
Chess is a challenging game to evaluate the level of artificial intelligence, and the collected dataset can be used for the following research works.
Weaknesses: The technical novelty of this work is incremental, as the techniques used in the proposed approach have been developed previously. The use of transformers in amortized planning is a contribution, but not a significant one.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors elaborate on whether the transformer architecture or the dataset is more important for the performance improvement?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the potential limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback.
**Is the transformer or the dataset more important for performance improvement?**
Although this cannot be determined with certainty given our results, we investigated this question in Table A2, where we ablated model architecture and the time limit used when annotating a move with Stockfish during dataset creation. Table A2 shows that using a higher time limit does lead to some performance gains (e.g., 1.4% higher puzzle accuracy) but at an extremely high computational cost (i.e., doubling the total unparallelized annotation time from 8680 days to 17360 days). Therefore, we compromise between computational effort and final model performance and use 0.05s per state-action value annotation.
**Using transformers for amortized planning is a contribution, but not significant enough.**
We respectfully disagree. Though the approach of amortizing a planning algorithm with a fixed-parametric function approximator may be evident to some researchers, it is typically not the mainstream interpretation. Large sequence models have been repeatedly dismissed as mere “statistical pattern matchers”, “stochastic parrots”, or “curve fitters”; implying that this approach is insufficient to capture complex algorithmic behavior. While the results of large foundation models may speak for themselves w.r.t. this criticism, we believe it is important to advocate and thoroughly test the amortization viewpoint. Just to reiterate: the state space of chess is very large - even when only considering the “natural” distribution of board states from games on lichess.org, all but the most trivial games face our networks with mostly unseen board states, where accurate value estimates and/or actions require considerable generalization beyond the training data. Our results cannot be explained as memorization with a bit of interpolation or simple statistical pattern matching, and not too long ago, such results were thought to be unachievable without explicit planning. Our results also show that transformers (up to the size used in our experiments) cannot always fully match Stockfish. Together, this raises two important future questions: (i) What is missing to match Stockfish’s performance; is it just a matter of scale, or can we identify systematic shortcomings of transformers? (ii) How do transformers implement amortized planning, and how is it related to architectural parameters such as depth? While these questions are beyond the scope of our current work, we believe that our results and dataset lay excellent groundwork to investigate these questions in the future. | Summary: The paper introduces a large dataset of chess board states along with annotations for best moves and state-(action)-values. 
It demonstrates that transformers trained on this dataset with supervised learning can achieve significant performance and generalization. This adds to the growing evidence showing that neural nets can implement complex behaviors (such as imitating search-based chess engines).
Strengths: - There is continuing debate about the extent to which neural networks learn generalizing non-trivial algorithms, so this is a timely addition.
- The paper has extensive experiments with a wealth of different metrics and some interesting ablations.
- I like the discussion in section 4 and elsewhere. It's transparent about potential issues, but in my view none of these are a big issue for the overall takeaways, and it seems fine to add manual workarounds to deal with them (like the authors do).
- The writing is very clear and covers all the details I was interested in. Overall, I really like the execution of this paper.
Weaknesses: EDIT: see my reply (https://openreview.net/forum?id=XlpipUGygX&noteId=UsMmll2N3a) for my updated stance.
The paper does not add too much that didn't already exist outside peer-reviewed ML venues. As the authors discuss (line 312), the existing Leela Chess Zero network is stronger than the network introduced by this paper. The T82 version of Leela (which this paper compares against) is also trained using supervised learning and uses a transformer architecture similar to the one in this paper. Similarly, datasets of chess board states with engine annotations exist (e.g. at https://database.lichess.org/). I'm not aware of datasets with state-action-value annotations for all moves in a state, but it's not clear that this is even necessary to train a state-action-value predictor.
In my mind, the main contribution of this paper is thus creating a well-documented and peer-reviewed chess-playing neural network and dataset, as well as running ablations (which likely don't come as a surprise to the chess engine community, but seem interesting for the broader NeurIPS audience). This is still a nice contribution (though I wish it was made clearer from the start that this is what the paper does, e.g., in the intro/"contributions" paragraph. EDIT: the authors' changes do make this clear now).
Minor notes:
- Regarding why BC does worse than value predictions (appendix B.5): I don't think the richer training signal is the full explanation, since the value nets of Leela/AlphaZero also outperform their policy nets, even though those policy nets are trained to imitate a full distribution (rather than the single best move). My guess is that another reason is the weak form of "1-ply search" inherent in letting the network evaluate many different actions/follow-up states and taking an argmax.
- Line 177: T82 is a transformer, not a ConvNet (see more below)
- Nitpick for table 1 and elsewhere: Leela and AlphaZero get the past eight board states as input, rather than the full move history (though the lack of even earlier states shouldn't matter much). I think the most notable difference is in fact between GPT-3.5 vs all the other networks, since GPT-3.5 gets *only* moves and so has to learn to keep track of the board state itself. So in the "Input" column, I might distinguish between "PGN" (for GPT-3.5), "FEN" (for the new models), and "FEN + past states" (for Leela and AlphaZero).
- On the claim that the network plays at grandmaster level: this is a difficult comparison to make, since human players play better when given more time per move, whereas the neural network of course doesn't improve from time beyond what's needed for one or a fixed number of forward passes. I agree that the network seems as strong as grandmasters with Blitz time controls, but it might still be significantly weaker than grandmasters at classical time controls. I think this distinction isn't always clear (e.g. line 44).
Detailed notes on Leela vs the new model in this paper: The T82 version of Leela that this paper compares against was trained using supervised learning on MCTS data, rather than trained with online RL (this isn't well-documented, but see https://lczero.org/play/networks/basics/ and note that T82 is one of the "contrib runs"). It is also a transformer with an input encoding not too different from the one used in this paper (one token position per square of the chess board). The main differences as far as I can tell are that Leela has some domain-specific improvements to its architecture (https://lczero.org/blog/2024/02/transformer-progress/, Smolgen is part of T82), uses fewer parameters, and was trained on different (and probably more) data.
Technical Quality: 4
Clarity: 4
Questions for Authors: (pretty minor)
- If I understand correctly, all three types of predictors use entirely separate models. I'm wondering whether training a single model with a shared body and three small prediction heads would lead to transfer between predictors (e.g. let the BC policy profit from the larger amount of data used to train the state-action-value predictor). Is this something you considered/tried?
- Why is the state-value predictor trained only on the smaller amount of data (compared to the state-action-value predictor)? Couldn't you use the state-action-value annotations as state-value annotations for the follow-up state, which would mean you'd already have all the training data needed to also train the state-value predictor on more data?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: Issues are discussed well in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful study of our paper and, in particular, for all their clarifying remarks on lc0, which have helped us to improve the description of that baseline.
**Lc0’s T82 network is a domain-specific transformer (rather than a ConvNet), trained using supervised learning on an unknown amount of MCTS data (rather than using online RL).**
Thank you for these clarifications and all the additional information on lc0! We agree with the reviewer that these details are not well-documented, which makes a scientific comparison beyond evaluating playing strength difficult (e.g., it is unclear which and how much data was used to train T82). We have updated the description of lc0 in Section 2.5 as follows:
“**Leela Chess Zero** We consider three variants: (i) with 400 MCTS simulations, (ii) only the policy network, and (iii) only value network (where (ii) and (iii) perform no additional search). We use the T82 network, which is a transformer and the largest network available on the official Leela Chess Zero website [5]. T82 uses one token per square of the chess board, has fewer parameters (94M vs. up to 270M for our models), includes some domain-specific architecture changes (e.g., Smolgen [29]), and was trained using supervised learning on MCTS data.”
**The grandmaster claim against humans may only hold with Blitz time controls (since human performance improves with more evaluation time, unlike the models trained in this paper), which isn’t always clear in the paper.**
We fully agree that our results only imply grandmaster-level performance against humans with Blitz time controls, and we will clarify this distinction throughout the paper. For example, we have changed line 44 to “... playing chess at a high level (grandmaster) against humans *with Blitz time controls*”.
**BC does worse than value predictions because it performs a weak form of “1-ply search” rather than having a poorer training signal (lc0’s/AZ’s policy nets are also worse even though they imitate the full distribution, i.e., not just the best move).**
This is an interesting observation. Indeed, multiple factors probably contribute to BC’s relatively weak performance, and we agree that inference-time FLOPS (i.e., performing a less expansive “1-ply search” than the value-based methods) is probably also a major factor. We have expanded the discussion of these results in Appendix B5 accordingly.
**AZ and lc0 operate on the past 8 board states rather than the full move history.**
Thank you for the clarification! We agree that the reviewer’s proposed distinction of “PGN” for GPT-3.5 and “FEN + past states” for AZ and lc0 is more precise and have updated Table 1 and the description in Section 3.1 accordingly.
**Did you try training a single model with three prediction heads for the different predictor targets (AV, BC, SV) to enable transfer between them (e.g., more data for BC)?**
Thank you for this interesting suggestion! We have not tried training a single shared model with multiple heads. A priori, it is not clear whether this would increase overall performance, lead to catastrophic interference, or make little difference overall since all three targets are highly related. It is also unclear whether increased model capacity would be needed (or not) and how to combine all three predictions into a single policy (though there are obvious candidates). We believe that investigating these questions is out of scope for our current work, and have added them to a future work section in the discussion of our paper.
Note that we did conduct an ablation where all three predictor targets were trained on the same amount of data in Table A4 to investigate whether our observed differences in performance were simply a matter of different amounts of data.
**Why is the state-value predictor not trained on the (much larger) state-action-value dataset, using the state-action values as state-values for the next state?**
Indeed, that is an astute observation, which would cut the annotation time in half when creating the state value and action value datasets (assuming that Stockfish’s evaluation is symmetric). For our paper, we only trained the action value predictor on our largest dataset of ~15 billion data points. However, we also conducted the ablation where all three predictor targets are trained on exactly the same amount of data in Table A4, showing that the state value predictor performs more or less on par (slightly better, by 12 Elo) with the action value predictor. A priori, it seems plausible that learning to predict state and action values is of roughly equal complexity (in deterministic environments), but repeating our experiment from Table A4 (i.e., using the same number of training data points for all policies) at even larger scales would be an interesting direction for future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for those changes! They clear up all my concerns about the presentation of the motivation/contributions and other details. After more consideration, I've decided to increase my score to 6. In my mind, the main reason against acceptance is still that the paper's main findings aren't very surprising to anyone sufficiently familiar with Lc0, but I think the reasons in favor of acceptance outweigh that:
* Most of the ML community is not, in fact, familiar with Lc0, in part because of a lack of scientific/peer-reviewed writings on it. And for this majority of readers, I think the findings are quite interesting and important.
* As mentioned, I think this paper is very well done in terms of writing and experiments.
* Having a simpler and well-documented/reproducible chess-playing transformer could enable research that would have required significantly more effort on Lc0. So I think this paper has good potential for indirect impact via future work using these models and/or datasets. | Summary: The paper is well-written and demonstrates that supervised learning with Stockfish evaluations can create a strong chess engine. The methodology is solid, and the inclusion of an LLM as a baseline is noteworthy. However, the motivation is ambiguous and the use of transformers over CNNs is questioned. Additionally, the paper lacks clarity on certain statements, and the technical contribution is limited. Ablation studies on training objectives are also recommended.
Strengths: - This paper is well-written and easy to follow
- It is interesting to know that supervised learning with Stockfish evals can be a strong chess engine baseline
- The methodology is solid and well-explained
- The authors include an LLM as a baseline; it is interesting to see a proper evaluation of how well LLMs can play chess
Weaknesses: - W1: The motivation for this work is somewhat ambiguous. On the one hand, the authors claimed that the main idea is to approximate the search engine Stockfish, by distilling its board evaluation results into NNs (in particular transformers), but the evaluation is on chess performance instead of directly comparing the predicted and ground truth centipawn advantages. Although the action prediction evaluation is kind of doing that, it is still discretized to actions before comparing the rankings, not comparing the continuous values directly. Therefore the claim to approximate stockfish is invalid. On the other hand, the proposed system did really well in chess, it seems that the authors are trying to build a strong chess engine without searching. But it is still underperforming lc0 inference without search.
- W2: Why are transformers particularly interesting for this study? Many prior works have shown that the multi-channel encoding of chess boards combined with CNNs can perform really well, including AlphaZero, lc0, Maia, etc. Essentially, chessboards are 2D and the inputs used are sequential. Is it possible that the tokenization of FEN and the direct inputs to transformers are limiting the performance?
- W3: It is unclear what this statement means: “Although the source code is available, there is no (peer-reviewed) research report explaining and comparing Lc0’s methods”. Does this mean lc0 may include Stockfish eval already? Is it still fair to compare with lc0? It would be good if the authors could clarify.
- W4: The labels of the value prediction are ranges of centipawn advantages predicted by Stockfish. Is there any design to balance the labels? And is there any mechanism to make sure the categorical labels are semantically interconnected?
- W5: The methodological contribution of this work is limited. If I understand correctly, the method basically incorporates stockfish evals as training targets without task-oriented designs.
- W6: Ablation studies on the training objectives should be conducted and presented.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to W2-W4.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback.
**The paper claims to aim at approximating Stockfish by distilling its board evaluation into a transformer but evaluates performance w.r.t. chess play rather than Stockfish’s centipawn score, which invalidates this claim. Instead, the paper appears to create a strong chess engine without using explicit search, which, however, underperforms lc0.**
Thank you for pointing out a potential source of confusion!
We do evaluate the learning targets (i.e., the loss over win percentage/centipawns) in Appendix B.4 (Fig. A2 and A3). These loss curves show the quantitative comparison against the discretized ground-truth centipawn score.
We respectfully disagree that our goal was not to distill Stockfish’s value estimation behavior. The objective we *directly* optimize (and report in the appendix) is our model’s prediction error w.r.t. Stockfish’s value estimates, and as a modeling assumption, we chose to phrase this as a discretized prediction problem (similar to distributional RL). This is the main idea behind general algorithm distillation (i.e., minimizing prediction error over an algorithm’s outputs). We mainly focused on two *behavioral* metrics for comparing models since they imply good quantitative predictions and allow easy comparison across learning targets and models. One could think of these metrics as using a chess-relevant distortion function. If, instead, our goal were to build a strong chess engine, we would have used explicit search and a domain-specific transformer. The fact that lc0’s policy net performs ~160 Elo better is interesting but does not undermine the main goal of our paper (which is to carefully investigate various architectural and training factors, not to outperform).
**Why are transformers interesting for this study if prior work showed that convolutional networks achieve strong performance on chess?**
We focus on investigating transformers’ amortized planning capabilities (in the context of chess). While convolutional networks have been used successfully in the past (e.g., AlphaZero, or past versions of lc0), the more recent consensus seems to be that transformer-based architectures are at least equally suitable, perhaps even stronger. For example, the strongest modern lc0 networks are transformer-based, as also pointed out by `R-LZwX`, which is the result of many ablations by the lc0 community. While it is intuitive that 2D convolutions implement good inductive biases for the 2D chess board, it is also plausible that (self-)attention leads to good inductive biases for considering relative relations between sets of relevant pieces regardless of their precise spatial arrangement on the board. To avoid speculation, we also performed an ablation with a convolutional network in our paper (see Table A2).
**What does it mean that lc0’s source code is available but that there is no (peer-reviewed) research report explaining and comparing lc0’s methods? Does this mean lc0 may include Stockfish already? Is it still fair to compare with lc0?**
The difficulty with lc0 is that it is a collection of models (originally re-implementing AlphaZero, but with significant evolution since then) that is developed by an active chess engine community, which makes heavy use of the lc0 blog and discord server to share results. This means that detailed architectural information is often only available via the source code (~90k lines of code), and, more importantly, full details about the training process and data sets are typically not part of the open-source release and are thus simply not available for many lc0 networks (though we cannot rule out that this information could be found on lc0’s discord server). For example, even `R-LZwX`, who seems very knowledgeable about lc0, is not fully aware of all the training details. While the lc0 community has made impressive contributions and progress over AlphaZero, the lack of independent reproducibility due to missing information makes the networks currently less suitable for academic research (one of the key requirements of peer-reviewed research articles is providing all information necessary for independent reproduction of results). One positive side-effect that our paper and dataset/model release hopefully have is to encourage and enable others to evaluate their methods in a way that allows for scientific reproducibility.
Consequently, we do not know whether lc0 includes Stockfish evaluations. However, this does not have an impact on our work and main claims. Similarly, whether comparing to lc0 is “fair” cannot be answered without knowing the full training details of lc0, which is a caveat we clearly state in our paper (lines 190 - 192): "We show these comparisons to situate the performance of our models within the wider landscape, but emphasize that some conclusions can only be drawn within our family of models and the corresponding ablations that keep all other factors fixed."
**Can you conduct ablation studies on the training objectives?**
Yes, we have already conducted the following ablations of the training objectives:
* *loss function* (Table 2): log-loss (classification) vs. HL-Gauss loss (classification) vs. L2 loss (regression)
* *predictor-target* (Table 2): action-value vs. state-value vs. behavioral cloning
* *number of value bins* (Table A2): 16 to 256 bins for the HL-Gauss loss
If the reviewer has any other ablation of the training objectives in mind, we are happy to include it in the next revision of our paper.
**Did you try to balance the labels and make sure that they are semantically interconnected?**
We did not balance the labels as we did not want to perturb the natural game/value distribution. We are not quite sure what the reviewer means by “semantic interconnectedness”. Could you please clarify? Note that we did perform ablations on the number of bins we used for our discretization (see Table A2) and an ablation on a non-discretized loss (see Table 2).
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed explanations and pointers to the resources in the Appendix.
However, I didn't find the answers to "W5: The methodological contribution of this work is limited. If I understand correctly, the method basically incorporates stockfish evals as training targets without task-oriented designs.", which remains my main concern.
By "semantic interconnectedness" I mean that: the training targets are Stockfish evals and you used "value binning" to cut the value ranges into "classes". However, the "classes" are dependent. For example, the range 0-10 centipawn advantage would be similar to the range 10-20 but significantly different from the range 290-300. The method seems to lack any design to model such dependencies.
Nevertheless, the rebuttal has solved most of my concerns. I have adjusted my evaluation.
---
Reply to Comment 1.1.1:
Comment: We are pleased to hear that the rebuttal has solved most of the reviewer's concerns and thank the reviewer for adjusting their evaluation accordingly!
We also thank the reviewer for clarifying the term "semantic interconnectedness". We actually spent considerable time addressing the semantic interconnectedness of the classes, and as the reviewer rightly suspected, it turned out to be important for performance. This was precisely the reason for using the HL-Gauss loss [17], which extends the classical cross-entropy loss with label smoothing (see Figure 3 in [18] for an intuitive visualization). We ablated three different loss functions in Table 2: two that model semantic interconnectedness (L2 and HL-Gauss) and one that does not take it into account (i.e., the log loss), with the HL-Gauss loss achieving the highest performance.
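To make the label-smoothing idea concrete, here is a minimal sketch (hypothetical helper names, not the paper's implementation) of how an HL-Gauss-style target can be constructed: the scalar value is smeared into a categorical distribution by integrating a Gaussian centered at the target over each bin, so neighboring bins receive similar probability mass while distant bins receive almost none:

```python
import numpy as np
from math import erf, sqrt

def hl_gauss_targets(value, bin_edges, sigma):
    """Soft categorical target: probability mass of N(value, sigma^2) in each bin."""
    cdf = np.array([0.5 * (1.0 + erf((e - value) / (sigma * sqrt(2.0))))
                    for e in bin_edges])
    mass = np.diff(cdf)          # mass assigned to each of len(bin_edges)-1 bins
    return mass / mass.sum()     # renormalize mass clipped outside the support

def hl_gauss_loss(logits, value, bin_edges, sigma):
    """Cross-entropy between the model's bin logits and the smoothed target."""
    target = hl_gauss_targets(value, bin_edges, sigma)
    log_probs = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return -np.sum(target * log_probs)
```

Unlike the plain log loss (a one-hot target), adjacent value bins share mass under this target, which is exactly the "semantic interconnectedness" discussed above.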
We admit that this important point was not given enough attention in the current manuscript, and we will clarify it in the next revision of our paper.
We hope that this addresses the reviewer's remaining concerns. | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed comments and positive feedback.
We are pleased that the reviewers consider our paper well-written (`R-wU89`, `R-LZwX`, `R-Ye4U`) and a “timely addition to the literature of learning non-trivial algorithms” (`R-LZwX`, `R-tXJ6`), our methodology solid (`R-wU89`), our experiments extensive “with a wealth of different metrics and interesting ablations” (`R-LZwX`, `R-tXJ6`), and our open-source benchmark dataset “very useful for future research” (`R-Ye4u`, `R-tXJ6`).
Here, we summarize our response to the common questions. We respond to the individual questions below every review.
`R-wU89`, `R-LZwX`, `R-tXJ6`: **What is the motivation for and main contribution of this work?**
The motivation for our work is to understand transformers’ capabilities of amortizing planning via supervised learning. We conduct a case study using chess as a testbed because it is very well studied, memorization is futile even at large scale, and strong chess engines use fairly sophisticated algorithms for which it is not trivially clear that transformers can easily learn to mimic them. As `R-LZwX` points out, the computer chess community has made similar previous and parallel investigations, but not necessarily with the same aim and scientific reproducibility in mind. Hence, we believe that a rigorously executed case study with well-documented details adds value and facilitates future research and scientific comparison of approaches. An important part of this (but not the only contribution) is crafting a benchmark dataset (and documenting exactly how the data is collected and how evaluations are performed).
We have made the following changes (in italics) to our introduction to clarify the motivation and goals of our paper:
* L38: *The goal of our paper is thus to investigate to which degree vanilla transformers can be trained to mimic Stockfish’s value estimation algorithm by minimizing prediction error (log-loss) and what quantitative impact architectural parameters and training design choices have on this capability.* To *scientifically* address this question, we created ChessBench, a large-scale chess dataset created from 10 million human games that we annotated with Stockfish 16.
* L43: The resulting policies are capable of solving challenging chess puzzles and playing chess at a high level (grandmaster) against humans *with Blitz time controls*.
* L51: *We note that building a strong chess engine is not our primary goal (we use game-playing-related metrics, i.e., playing strength and puzzle-solving accuracy, as performance measures since they are highly correlated with good value estimates and allow for easy comparison with other models). Some of our findings are qualitatively known in the chess engine community, particularly the Leela Chess Zero community, and we do compare against one of their state-of-the-art transformer-based models, which performs slightly better than our largest model. Since the training details of this network are opaque, a direct comparison must be taken with a grain of salt, and we instead perform extensive ablations on our model to ensure that all non-ablated parameters are kept fixed. To easily enable further research and improve standardization and reproducibility, we release our dataset and evaluation metrics as a benchmark, ChessBench, and provide some initial results via our models and ablations and the models we compared against.*
`R-wU89`, `R-Ye4U`: **The methodological contributions are limited because there are no domain-specific modifications.**
Indeed, we do not make domain-specific modifications to the architecture/training protocol, but that is deliberate and one of our main design principles: Our goal is to provide an evaluation of transformers in their standard, tried-and-tested setup, rather than building the best possible chess architecture (which would require domain-specific tweaks). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length | Accept (poster) | Summary: The paper introduces Megalodon, an improvement on the existing technique Mega. This technique uses a diagonalizable complex moving average to allow for efficient integration of information across a longer context.
Strengths: - The technique is validated against modern models across a large variety of benchmarks
- The diagonalizable complex moving average seems like an easy win for efficient computation and improvement
- The scale of the models produced is quite large and provides clear signal, while also producing useful model artifacts
Weaknesses: The evaluation is lacking in some key aspects. If these are remedied, the paper is strong.
- There needs to be more comparison against MEGA, this seems critical to establish technical novelty beyond operating at a larger scale than the Mega paper.
At scale, for the core NLP benchmarks, how does Megalodon compare to Mega and Mega-chunk? What changes actually make an improvement? Ablations, in particular around these design choices, seem key to establishing the technical novelty, since the changes are somewhat small and it's not clear whether they buy you a large gap in improvement, or which pieces do.
I have looked at the ImageNet Mega comparison, and those in the appendix, but these are not at scale, which is a critical element of the paper.
- The paper spends much time comparing against models with shorter context lengths. While it is clear the unlimited inference context length is an advantage, Mega had this too. A compute-matched setting should still be compared against. In particular, how does Llama trained with, say, a ~28k context (or whatever context yields the same 1.3x speed improvement as Megalodon) perform in comparison at the scale of the main experiments on the main benchmarks? The monotonic perplexity in Figure 5 is nice to see, but the perplexity of other methods at their native or extended context lengths should be included; the minimum perplexity of Megalodon might be higher than those achieved by other methods at shorter context lengths.
- A commonly targeted use case of models like this is direct byte level modeling, it would be interesting to see how this method performs on that task. Many of the methods in the long context section were designed with this purpose explicitly in mind. This seems like an application which is important to test on, particularly if context extension is viable in this domain using this method, that would be very interesting.
- Mamba state space models seem to exhibit naturally good context extension, since this method can be seen as a state space model it might be good to compare in the long sequence domain at a similar scale to the 3b Mamba-[1,2] models.
While it might be infeasible to include all of the above, the first 2-3 seem of particular importance to establish the usefulness and technical novelty of this technique.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why do modeling improvements not start until well over 1T training tokens? Is this a byproduct of the learning schedule or of something structural within the model?
How much compute is added over Mega/Mega-Chunk to incorporate the necessary changes?
How is the complex moving average implemented?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations should be addressed further and more directly in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and constructive comments! We appreciate your positive feedback on the good motivation, novelty of Megalodon and its strong empirical results. We address your concerns and questions below and please let us know if you still have concerns after you read our response.
> W1: There needs to be more comparison against MEGA, this seems critical to establish technical novelty beyond operating at a larger scale than the Mega paper…
The ablation studies on small/moderate-scale benchmarks are in the [author rebuttal](https://openreview.net/forum?id=XlAbMZu4Bo&noteId=G72sD1BBxi).
We cannot successfully train Mega on a larger-scale setting, due to numerical instability. That is why we did not compare Megalodon with Mega with large scale experiments.
> W2: In particular, How does Llama trained with let's say a ~28k context (or whatever context yields the same 1.3x speed improvement of Megaladon) perform in comparison at the scale of the main experiments on the main benchmarks.
The training flops of Megalodon-7B are similar to those of LlaMa2-7B with 4K context, because we also used 4K as the chunk size of the chunk-wise attention. To match the 1.3x speedup over the Transformer with 32K context length, the context length of full attention would have to be at most 8K-10K.
> W3: The monotonic perplexity in Figure 5 is nice to see, but the perplexity other methods at their native or extended context lengths should be included, the minimum perplexity by megaladon might be higher than those achieved by other methods at shorter context lengths.
[Previous studies](https://github.com/jzhang38/LongMamba) have investigated the PPL of Transformer and other architectures (such as Mamba) beyond their training context lengths. Without any fine-tuning for length extension, Transformer and Mamba did not achieve the monotonic PPL of Megalodon in Figure 5.
> W4: A commonly targeted use case of models like this is direct byte level modeling, it would be interesting to see how this method performs on that task.
We appreciate your suggestion on evaluating Megalodon on byte-level modeling tasks. In our experiments on PG-19 (Table 5), we compared Megalodon with MegaByte, which is one of the most advanced byte-level architectures. In addition, among the six tasks in LRA (Table 6), the three text-classification tasks are in byte level. These results indirectly demonstrate the effectiveness of Megalodon on byte-level modeling. We leave more advanced byte-level modeling with Megalodon to future work.
> W5: Mamba state space models seem to exhibit naturally good context extension, since this method can be seen as a state space model it might be good to compare in the long sequence domain at a similar scale to the 3b Mamba-[1,2] models.
[Previous study](https://github.com/jzhang38/LongMamba) showed that, without any fine-tuning, the context extension of Mamba is not as good as Megalodon.
> Q1: Why do not start to modeling improvements until well over 1T training tokens, is this a byproduct of the learning schedule or of something structural within the model?
Different factors impact the convergence speed at the beginning of training, including the peak learning rate, the learning rate scheduler, and the number of warmup steps. Thus we believe the final pre-training loss is more indicative.
> Q2: How much compute is added over Mega/Mega-Chunk to incorporate the necessary changes?
The additional flops of Megalodon over Mega are from the complex CEMA (over real-number EMA) and Timestep Normalization (over Layer/RMS Normalization). These additional flops are marginal (< 1%) compared to the total flops in Megalodon.
> Q3: How is the complex moving average implemented?
The implementation of CEMA is similar to that of EMA. We use FFT to compute the outputs (see Appendix A in the Mega paper for details). To accelerate CEMA, we implemented separate CUDA kernels for the computation of the convolutional kernel and the FFT.
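As a rough illustration of the FFT path (a scalar toy version with hypothetical names — Mega/Megalodon's actual (C)EMA is multi-dimensional and damped, with dedicated CUDA kernels), the recurrence can be unrolled into a causal convolution whose kernel is a (possibly complex) geometric sequence, which FFT evaluates in O(L log L):

```python
import numpy as np

def ema_kernel(alpha, decay, length):
    """Convolution kernel of the recurrence y_t = alpha*x_t + decay*y_{t-1}.
    For a CEMA-like variant, `decay` is complex with |decay| < 1, adding oscillation."""
    return alpha * decay ** np.arange(length)

def ema_via_fft(x, alpha, decay):
    """Evaluate the recurrence as a causal convolution via FFT."""
    L = len(x)
    k = ema_kernel(alpha, decay, L)
    n = 2 * L  # zero-pad to avoid circular wrap-around
    y = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(k, n))[:L]
    return y.real  # take the real part of the complex output

def ema_recurrence(x, alpha, decay):
    """Reference O(L) sequential recurrence, for checking the FFT path."""
    y, state = np.empty(len(x), dtype=complex), 0.0
    for t, xt in enumerate(x):
        state = alpha * xt + decay * state
        y[t] = state
    return y.real
```

Both paths compute the same outputs; the FFT form is what makes training-time evaluation of the moving average efficient.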
---
Rebuttal Comment 1.1:
Title: Nearly There
Comment: Hi,
Thank you for the thoughtful response.
The ablations help, and knowing that the changes were critical to allowing Mega to achieve larger scales is good, as is knowing that byte-level classification tasks are present.
For the monotonic perplexity plot, can you please add in a y-axis scale and the perplexities of the other techniques at their native context length. The moving average makes it clear that generalization is possible, but it's critical to know the range of improvement we can actually expect. The actual improvement may not be that much though.
With this final change, I will raise my score.
Thank you
---
Reply to Comment 1.1.1:
Title: Re: Nearly There
Comment: We appreciate your positive response!
> For your suggestion on reporting PPLs of other architectures at their native context length.
The problem is that the PPLs from different models on the held-out validation dataset are not directly comparable, since these models were pre-trained on different training data. The only comparable model/architecture is LlaMa2-7B because it was trained on exactly the same 2T data with Megalodon. However, LlaMa2 only supports 4K context length. On 4K context length, the PPL of LlaMa2-7B on the validation data in Figure 5 is $5.09$ while the corresponding PPL of Megalodon is $4.98$. Thus, even at a relatively short context length of 4K, Megalodon slightly outperforms LlaMa2 on PPL. | Summary: Megalodon i.e. Mega2 improves over Mega by using (1) complex-valued EMA; (2) improved normalization schemes (e.g. timestep normalization, attention normalization, pre-norm with 2-hop residuals).
Strengths: - Timestep normalization is simple and reasonable. The highly-optimized CUDA kernel provided in this work could be very influential - it could become as popular as layernorm/groupnorm in the future.
- Gated attention is verified in large-scale training setting for the first time.
Weaknesses: - There are no ablation studies in moderate-scale language modeling at all
- Figure1: the training sequence length is not the same, thus the perplexity comparison might be unfair.
- Lacking discussions & comparison to recent hybrid (local) attention and RNN models: e.g. Griffin, Jamba
- Long context evaluation is not comprehensive: the paper claims unlimited context length, while it is unclear about the effective context length. It is better to provide the Needle in the Haystack results
Technical Quality: 3
Clarity: 3
Questions for Authors: - Complex-valued linear recurrence is replaced by gated linear recurrence for many recent works: e.g. LRU (complex) -> RG-LRU (real gated). HGRN1 (complex) -> HGRN2 (real gated). S4 (complex) -> Mamba1 & 2 (real gated). Did you try real-valued gated linear recurrence layers?
- What's your opinion on Sliding Window Attention vs. Chunk Attention?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and constructive comments! We appreciate your positive feedback on the good motivation, novelty of Megalodon and its strong empirical results. We address your concerns and questions below and please let us know if you still have concerns after you read our response.
> W1: There are no ablation studies in moderate-scale language modeling at all
The ablation studies on small/moderate-scale benchmarks are in the [author rebuttal](https://openreview.net/forum?id=XlAbMZu4Bo&noteId=G72sD1BBxi).
> W2: Figure1: the training sequence length is not the same, thus the perplexity comparison might be unfair.
Though the training sequence length is not the same, the Megalodon-7B model uses a 4K attention chunk size, which is the same as the context length of full attention in LlaMa2-7B. Thus Megalodon-7B and LlaMa2-7B in Figure 1 were trained on the same data with similar flops/token, yielding a relatively fair PPL comparison.
> W3: Lacking discussions & comparison to recent hybrid (local) attention and RNN models: e.g. Griffin, Jamba
Griffin and Jamba are concurrent works, and they are trained at different scales of model size and data.
> W4: Long context evaluation is not comprehensive: the paper claims unlimited context length, while it is unclear about the effective context length. It is better to provide the Needle in the Haystack results
We conducted experiments to evaluate Megalodon on retrieval-oriented tasks, such as passkey retrieval. Consistent with previous studies, due to the chunk-wise attention, Megalodon under-performed on these retrieval-oriented tasks compared with the full attention mechanism, but outperformed state-space models such as Mamba.
For example, without any fine-tuning for length extension, Megalodon completes the passkey retrieval task with up to 16K context length, while Mamba can only complete this task up to 4K context. Long-LlaMa2, which continually trains LlaMa2 on selected long-context data for length extension, is able to complete up to 32K context length.
Further improving Megalodon for retrieval-oriented tasks is an interesting and important direction for future work.
> Q1: Complex-valued linear recurrence is replaced by gated linear recurrence for many recent works: e.g. LRU (complex) -> RG-LRU (real gated). HGRN1 (complex) -> HGRN2 (real gated). S4 (complex) -> Mamba1 & 2 (real gated). Did you try real-valued gated linear recurrence layers?
Gated linear recurrence, as in Mamba, HGRN1, etc., incorporates input-dependent recurrence, which sacrifices efficiency compared to (complex) EMA and S4. We did not try gated linear recurrence mainly because Megalodon leverages the chunk-wise attention mechanism, which might down-weight the necessity of input-dependent recurrence.
> Q2: What's your opinion on Sliding Window Attention vs. Chunk Attention?
Sliding Window Attention is theoretically more powerful but less efficient than Chunk-wise Attention. Moreover, sliding window attention increases the difficulty of parallel training along the chunk/context parallel groups. It is an interesting direction for future work to efficiently incorporate sliding window attention into Megalodon.
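To illustrate the trade-off with a toy sketch (hypothetical helper names): under chunk-wise attention, a token at a chunk boundary sees no earlier tokens, whereas a sliding window always sees the previous `w` tokens — but its attention band crosses chunk (and hence device) boundaries, which is what complicates parallel training:

```python
import numpy as np

def chunk_mask(L, c):
    """Causal chunk-wise mask: token i attends to j iff j <= i and both lie
    in the same chunk of size c (no cross-chunk attention to communicate)."""
    i, j = np.indices((L, L))
    return (j <= i) & (i // c == j // c)

def sliding_window_mask(L, w):
    """Causal sliding-window mask: token i attends to the previous w tokens
    (including itself); the band crosses chunk boundaries."""
    i, j = np.indices((L, L))
    return (j <= i) & (i - j < w)
```

With `L=8, c=w=4`, token 4 (the first token of the second chunk) attends only to itself under the chunk mask, but still sees tokens 1-3 under the sliding window — hence the latter's greater expressive power and higher communication cost.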
---
Rebuttal Comment 1.1:
Comment: Regarding ablations, it is well known that performance on WikiText-103 can be sensitive to regularization since the dataset is relatively small; can we have some experiments on SlimPajama?
---
Reply to Comment 1.1.1:
Title: Re: Official Comment by Reviewer 6wcE
Comment: We appreciate your suggestion on conducting ablation studies on SlimPajama. However, due to the limits of time and computation resources, we cannot provide the results during the rebuttal period. We will consider adding this study to our final version.
In addition, though the performance on WikiText-103 might be impacted by regularization, we note that we fully swept the hyper-parameters for the models in our ablation study to ensure a fair comparison. The results in our table are better than previous state-of-the-art numbers, demonstrating the reliability of our ablation study. | Summary: This is an empirical paper. The paper presents MEGALODON, a neural architecture designed to overcome the quadratic complexity and weak length extrapolation of Transformers. Through extensive experiments, this paper demonstrates MEGALODON's ability to efficiently handle unlimited context lengths and its superior performance across different tasks and modalities. The claimed key contributions of this paper are as follows:
1. improves upon the MEGA architecture by adding the complex exponential moving average (CEMA), timestep normalization, normalized attention, and a pre-norm with two-hop residuals.
2. achieves better pretraining efficiency and downstream task accuracy than Transformers, specifically in handling long sequences with 7 billion parameters and 2 trillion training tokens.
3. outperforms Transformers across various benchmarks, including long-document comprehension, multi-turn conversation, and video generation.
Strengths: 1. The performance of the proposed architecture is excellent, as demonstrated in Table 1.
2. Extensive experiments are conducted to evaluate the proposed methods.
3. The topic of this paper is important and critical for the LLM domain.
Weaknesses: 1. The training curves of MEGALODON and LLAMA2 7B in Figure 1 cross at 750 billion training tokens. It would be interesting to see if they cross again with more tokens, such as at 6 trillion.
2. Section 3.2 is difficult to understand due to the lack of background and intuitive explanations.
3. Section 3.5 makes overclaims about 4D parallelism. This topic is well-explored and relevant, as discussed in [1, 2, 3], but these references are ignored. If the paper aims to highlight the benefits of 4D parallelism, it should include comparative experiments.
4. The code link provided in the abstract does not work.
5. The long context tasks in Section 4.3, as shown in Tables 2 and 6, are not fairly set up. In Table 2, the only fair baseline is LLAMA2-L, which performs better. This raises doubts about the proposed method's long-context ability since other models (Yarn, MPT, Xgen) use different training datasets, potentially limiting their long-context capability. For Table 5, the baselines (Transformer, Reformer, ...) also use different training datasets, making the comparisons misleading. Compared to MEGA, the improvements are marginal. Thus the comparison of the long-context ability is not convincing to me.
[1] Sequence Parallelism: Making 4D Parallelism Possible
[2] Lightseq: Sequence level parallelism for distributed training of long context transformers
[3] USP: A Unified Sequence Parallelism Approach for Long Context Generative AI
Technical Quality: 2
Clarity: 3
Questions for Authors: see Weaknesses
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: see Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and constructive comments! We appreciate your positive feedback on the good motivation, novelty of Megalodon and its strong empirical results. We address your concerns and questions below and please let us know if you still have concerns after you read our response.
> W1: The training curves of MEGALODON and LLAMA2 7B in Figure 1 cross at 750 billion training tokens. It would be interesting to see if they cross again with more tokens, such as at 6 trillion.
We did not have sufficient computational resources to compare Megalodon and Transformer at the scale of 6T training tokens. However, based on the trends of the learning curves in Figure 1, we believe the gap between Megalodon and Transformer would not narrow with more training data.
> W2: Section 3.2 is difficult to understand due to the lack of background and intuitive explanations.
As discussed at the beginning of Section 3.2, the motivation of Timestep Normalization is to perform normalization along the timestep axis, reducing internal covariate shift along the temporal dimension, analogous to how Batch Normalization and Group Normalization operate along the batch and spatial dimensions. The effectiveness of Timestep Normalization has been shown in the [author rebuttal](https://openreview.net/forum?id=XlAbMZu4Bo&noteId=G72sD1BBxi).
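As an intuition aid, here is a minimal NumPy sketch of normalization along the timestep axis; the cumulative-statistics formulation (normalizing step $t$ by the statistics of steps $0..t$ to stay causal) is an assumption for illustration and may differ in detail from the paper's exact Timestep Normalization.

```python
import numpy as np

def timestep_norm(x, eps=1e-5):
    """Normalize each timestep t of x (shape [T, D]) using the mean and
    variance of steps 0..t only, so no future information leaks in."""
    t = np.arange(1, x.shape[0] + 1)[:, None]
    mean = np.cumsum(x, axis=0) / t
    var = np.cumsum(x * x, axis=0) / t - mean ** 2
    return (x - mean) / np.sqrt(var + eps)
```

Note that at the final timestep this coincides with normalizing by the full-sequence statistics, which is where the analogy to Batch/Group Normalization is most direct.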
> W3: Section 3.5 makes overclaims about 4D parallelism. This topic is well-explored and relevant, as discussed in [1, 2, 3], but these references are ignored. If the paper aims to highlight the benefits of 4D parallelism, it should include comparative experiments.
Thanks for pointing out the related work we missed. In fact, we have cited [2] in our submission. We will add the other two works in our final version.
In Section 3.5, we want to highlight the benefit of chunk-wise attention in parallel pre-training. As explored in [1-4], advanced algorithms have been developed for the distributed computation of full attention. However, these algorithms involve significant communication costs. Benefiting from the chunk-wise attention in Megalodon, no attention-related communication is needed along the chunk/context parallel groups.
[1] Sequence Parallelism: Making 4D Parallelism Possible
[2] Lightseq: Sequence level parallelism for distributed training of long context transformers
[3] USP: A Unified Sequence Parallelism Approach for Long Context Generative AI
[4] Ring Attention with Blockwise Transformers for Near-Infinite Context
> W4: The code link provided in the abstract does not work.
The anonymous link is a placeholder, not a real link. We will release the code in the final version.
> W5: The long context tasks in Section 4.3, as shown in Tables 2 and 6, are not fairly set up. In Table 2, the only fair baseline is LLAMA2-L, which performs better. For Table 6, the baselines (Transformer, Reformer, ...) also use different training datasets, making the comparisons misleading. Compared to MEGA, the improvements are marginal. Thus the comparison of long-context ability is not convincing to me.
In fact, the model trained on the same data as Megalodon is LlaMa2, not LlaMa2-L. LlaMa2-L was obtained by continually training LlaMa2 on 500B tokens of selected long-context data for length extension. That is part of the reason why LlaMa2-L performs slightly better in Table 2.
For other long-context models, such as Yarn, Xgen and MPT, we agree that they are trained on different data and the comparison is not entirely fair. However, it is impractical to re-train all these models with the same data.
For the results in Table 6 on LRA, all the models (Transformer, Reformer, etc.) are trained on the same datasets for each task. Thus, the comparison in Table 6 is fair.
When we compare Megalodon-chunk with Mega-chunk in Table 6, the improvements are significant (87.62 vs. 85.66).
In Table 6, the improvements of Megalodon over Mega with full attention are less significant than those of the chunk-wise attention models. This is reasonable because the benefits of Megalodon over Mega may be masked by the full attention mechanism.
Strengths: 1. Clear motivations: All three improvements directly target the limitations of Mega.
2. The complex EMA is a novel approach.
3. The authors provide efficient parallelism.
Weaknesses: Recent theoretical work [1] has shown that efficient versions of attention (like the chunk-based method used in this paper) can limit the expressiveness of the model, particularly for reasoning tasks that involve long-range information. The paper evaluates Megalodon with long-context open-book QA tasks. How will Megalodon perform on PhoneBook lookup [2] with ICL, especially for phonebook lengths longer than 4K tokens? How will it perform on complex reasoning tasks requiring long context, such as 8-shot or 16-shot math problems with GSM8K or coding tasks on HumanEval?
Empirically, it is not clear how CEMA improves expressiveness. It would be most direct to compare using CEMA versus using EMA on these tasks.
The reviewer understands that the rebuttal period is short and is therefore not requiring most of these experiments to be added.
[1] Yang, Kai, et al. "Do Efficient Transformers Really Save Computation?" Forty-first International Conference on Machine Learning.
[2] Jelassi, Samy, David Brandfonbrener, and Sham M. Kakade. "Repeat After Me: Transformers are Better than State Space Models at Copying." Forty-first International Conference on Machine Learning.
Technical Quality: 3
Clarity: 3
Questions for Authors: Have the authors conducted ablation studies for small-scale models before moving to 7B, similar to the results in Appendix C? Including these ablation studies, if already performed, would help readers understand how the three designs impact performance.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not discuss its limitations. The authors believe there is no negative societal impact. In the paper checklist, justifications are required for answers marked "Yes," but the authors have deleted the justification for several items. For limitations, the authors claim they are discussed, but there is no justification provided.
The anonymous link is not working.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and constructive comments! We appreciate your positive feedback on the good motivation and novelty of Megalodon and its strong empirical results. We address your concerns and questions below; please let us know if any concerns remain after reading our response.
> W1 & Q1: Have the authors conducted ablation studies for small-scale models before moving to 7B, similar to the results in Appendix C?
The ablation studies on small-scale benchmarks are in the [author rebuttal](https://openreview.net/forum?id=XlAbMZu4Bo&noteId=G72sD1BBxi).
> W2: Recent theoretical work [1] has shown that efficient versions of attention can limit the expressiveness of the model…
We conducted experiments to evaluate Megalodon on retrieval-oriented tasks, such as passkey retrieval. Similar to previous studies, due to chunk-wise attention, Megalodon under-performed on these retrieval-oriented tasks compared with full-attention models, but outperformed state-space models such as Mamba.
For example, without any fine-tuning for length extension, Megalodon completes the passkey retrieval task with up to 16K context length, while Mamba can only complete this task up to 4K context. Long-LlaMa2, which continually trains LlaMa2 on selected long-context data for length extension, is able to complete up to 32K context length.
Further improving Megalodon for retrieval-oriented tasks is an interesting and important direction for future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal and additional ablation studies. After reviewing the new information, I am maintaining my original scores and continue to recommend acceptance. | Rebuttal 1:
Rebuttal: # Ablation studies on CEMA and Timestep Normalization
**Ablation on LRA**
We first conducted ablation studies on LRA to demonstrate the effectiveness of CEMA and Timestep Normalization components in Megalodon. The results are shown in the following table:
| Models | ListOps (LN) | Text (SN) | Retrieval (SN) | Image (BN) | Pathfinder (BN) | Path-X (BN) | Avg. |
| :--------- | :----------------: | :----------------: | :----------------: | :----------------: | :----------------: | :----------------: | :----------------: |
| Mega (EMA) | 58.76 | 90.19 | 90.97 | 85.80 | 94.41 | 93.81 | 85.66 |
| Megalodon (CEMA) | 61.13 | 90.58 | 91.51 | 87.32 | 96.11 | 96.98 | 87.37 |
| Megalodon (CEMA&TSN)| 62.25 | 90.50 | 91.76 | 87.16 | 96.85 | 97.21 | 87.63 |
In this study, we used the Mega architecture as the baseline, which uses different normalization layers for different tasks in LRA. The normalization layer for each task is given in parentheses after the task name in the table header. BN, LN and SN refer to Batch Normalization, Layer Normalization and Scale Normalization, respectively.
The second row in the table above is the Megalodon architecture obtained by replacing EMA with CEMA. For this architecture, we used the same normalization layers as the original Mega for each task. The third row is the Megalodon architecture with both CEMA and Timestep Normalization (TSN). Note that for this architecture, we used the same TSN for all six tasks in LRA.
**Ablation on WikiText-103**
We then conducted ablation studies on auto-regressive language modeling on the moderate-scale WikiText-103 dataset:
| Models | #Param. | PPL |
| :-----------------------------| :------------: | :--------: |
| Mega (EMA&LN) | 252M | 18.07 |
| Megalodon (CEMA&LN) | 252M | 17.63 |
| Megalodon (CEMA&TSN) | 252M | 17.23 |
In this study, we again used the Mega architecture with EMA and Layer Normalization (LN) as the baseline. The second row is Megalodon with CEMA and Layer Normalization, while the third row is Megalodon with CEMA and Timestep Normalization (TSN).
From the two ablation studies, both CEMA and Timestep Normalization show improvements over the original Mega architecture.
# Effect of normalized attention, and pre-norm with two-hop residuals
The normalized attention and pre-norm with two-hop residuals are designed to improve stability of Megalodon in large-scale pre-training. Their effects on small-scale models are not significant. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Random Cycle Coding: Lossless Compression of Cluster Assignments via Bits-Back Coding | Accept (poster) | Summary: This paper proposes Random Cycle Coding (RCC), which achieves the optimal rate for cluster assignments via bits-back coding techniques. In addition, the newly proposed algorithm requires fewer computational resources, making it more practical.
Strengths: For cluster assignments, this paper proposes an optimal compression code via bits-back coding techniques. From a coding perspective, the result is significant due to its optimality. The RCC algorithm has lower computing and memory costs, making it more practical.
Weaknesses: This paper borrows bits-back coding techniques and random order coding, making it quite incremental from a technical standpoint. The difficulty of combining these two techniques is not clearly presented.
Technical Quality: 3
Clarity: 3
Questions for Authors: This paper stresses the application in cluster assignments. Is there any other potential application? What is the special feature in cluster assignment that is compatible with RCC?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No. The limitations are not discussed explicitly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > This paper borrows bits-back coding techniques and random order coding, making this paper quite incremental in the view of techniques. The difficulty of combining these two techniques is not clearly presented.
RCC is a very different algorithm from ROC.
- ROC is a set-compression method, and was readapted to serve as a baseline, as there are currently no baselines for optimal cluster compression.
- ROC is the only existing optimal set compression algorithm, to the best of our knowledge, while RCC compresses arbitrary clusters, or partitions, of datapoints.
- RCC efficiently maps each possible partitioning of datapoints to a subset of permutations with a common cycle structure. ROC does not in any way utilize the cycle structure of permutations to communicate information.
The fact that RCC can re-use ROC is a feature and not a limitation of the contribution. A significant effort was made to understand and bridge both methods, so that RCC could be built on top of the efficient implementations available for ROC.
> This paper stresses the application in cluster assignments. Is there any other potential application? What is the special feature in cluster assignment that is compatible with RCC?
In its most general form, RCC is an algorithm for compressing random partitions of arbitrary sets (see [1] for background). The insight provided by this paper is that it is possible to map cluster assignments to the cycle structure of permutations, where each disjoint cycle defines one of the sets in the partition.
We will add an example to the appendix highlighting how cycles can be used to store the information of cluster membership for arbitrary data.
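In the spirit of that appendix example, here is a minimal Python sketch (illustrative only, not the paper's implementation; the function name is hypothetical) of reading a clustering off the disjoint cycles of a permutation:

```python
def cycles_to_clusters(perm):
    """Decompose a permutation (given as a dict i -> perm[i]) into its
    disjoint cycles, interpreting each cycle as one cluster of element ids."""
    seen, clusters = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:       # follow the cycle until it closes
            seen.add(i)
            cycle.append(i)
            i = perm[i]
        clusters.append(frozenset(cycle))
    return clusters

# The permutation (0 2 4)(1 3) encodes the clustering {{0, 2, 4}, {1, 3}}.
perm = {0: 2, 2: 4, 4: 0, 1: 3, 3: 1}
print(cycles_to_clusters(perm))
```

Because each element appears in exactly one cycle, a permutation determines a partition of the element ids; many permutations share the same cycle structure, and (as with bits-back coding generally) that freedom is what can carry the extra bits.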
[1] https://www.stat.uchicago.edu/~pmcc/pubs/partition10.pdf
---
Rebuttal Comment 1.1:
Title: Thank the authors for the rebuttal.
Comment: The rebuttal addresses my major concerns and I raise the score accordingly. | Summary: ## Summary
* This paper proposes an entropy coding technique named RCC, the first to achieve the optimal rate for cluster assignment. Theoretical and empirical results show that the rate savings and speedup of the proposed approach over the previous suboptimal work ROC are evident when the number of clusters is close to the sample size.
Strengths: ## Strength
* The paper presents an entropy coding technique that achieves the optimal rate for cluster assignment, a step forward from the previous work on multisets.
Weaknesses: ## Weakness
* Despite its theoretical optimality, the bitrate saving of RCC over ROC is somewhat marginal, especially when the number of clusters is not large compared with the number of elements. For a less extreme case where the number of clusters is much smaller than the number of elements (~$\sqrt{n}$), the difference between RCC and ROC can be quite small (see Fig. 3, n=10^7, cluster = 10^7 vs. cluster = 10^3.5).
* Similarly, the RCC approach has clearly better complexity than ROC only when the number of clusters is close to the number of samples. For the case when #clusters ~$\sqrt{n}$, the difference in time complexity can be less significant.
Technical Quality: 3
Clarity: 3
Questions for Authors: ## Questions
* What is the common practical ratio of #clusters to #samples? Is it common to have #clusters close to #samples?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the relevant practical questions.
Most questions center around a comparison to ROC. Indeed, there is a regime where the compression gain our method, RCC, provides over ROC is marginal (when $k$ is small). However, there are regimes where the gain is substantial (when $k$ is large). In all regimes, our method outperforms ROC while also being faster.
Given that RCC provides both better compression rates and provably better wall-clock times, as well as lower memory and computational complexity, the message we would like to highlight is: **in all regimes of $k$, there is no reason to use ROC over our method.**
> Despite its theoretical optimality, the bitrate saving of RCC over ROC is a bit of marginal, especially when the number of clusters are not so large compared with number of elements. For a less extreme case when number of cluster is much smaller than number of elements (~\sqrt{n}), the difference between RCC and ROC can be quite small. (See Fig 3, n=10^7, cluster = 10^7 and cluster = 10^3.5).
The performance of RCC is comparable to that of ROC-2, but RCC significantly outperforms ROC-1. In the regime of $k \approx \sqrt{n}$, where $k$ is the number of clusters, RCC can provide gains of up to 20% with respect to ROC-1 on benchmark datasets such as SIFT1M.
> Similarly, the RCC approach has obviously better complexity compared with ROC, only when we have number of clusters close to number of samples. For the case when #cluster ~\sqrt{n}, the difference of temporal complexity can be less significant.
Indeed, RCC is always better than ROC, both in terms of compression ratio and speed of execution, for any configuration of clusters. The difference in performance diminishes as the number of clusters decreases, reaching equality at the extreme $k=1$, where all elements are in the same set/cluster. For the typical scenario of vector databases, where $k \approx \sqrt{n}$, RCC is at least 20% faster than ROC (according to the experiments in Figure 2), while achieving the same or better compression ratio.
> What is the common practical #clusters compared with #samples? Is that common to have #clusters close to #samples?
In FAISS [1], the practical regime is $k = c \cdot \sqrt{n}$, where $c$ is some constant. The actual value depends on the specific nature of the database and the algorithm used to perform vector compression.
While this regime for $k$ is the desired one, in practice there are distribution shifts that happen in production. Initially, the assignment algorithm, i.e., the algorithm that decides where a new vector should be placed in the database, is trained on some training set targeting $k = c \cdot \sqrt{n}$. During deployment, when a new vector needs to be added, if the real-world distribution changes, there is no guarantee that this target will be met. The number of clusters can drift over time to be significantly larger or smaller. RCC provides a way to robustly compress the database irrespective of this distribution shift, while ROC does not.
It is shown that the method outperforms other baseline methods while requires less computational complexity and memory.
Strengths: The idea of employing the cycle representation of permutations to assign data to clusters is simple yet effective. The authors formalize the idea and show the effectiveness of their proposed method, which consistently outperforms the others.
Weaknesses: Although the main idea is simple to understand, it is quite hard to grasp the underlying techniques, such as ANS, ROC, and bits-back coding, from the paper. The paper compares its method only with ROC, which I think is not sufficient.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What are the differences between RCC and ROC in cluster assignment problem? From my understanding, RCC stores cluster information by permutation. Then, how ROC-1 and ROC-2 store the clusterings?
2. Is ROC only existing baseline method?
3. To leverage permutations, it seems sequential stacking is required, and parallel stacking or decoding is not allowed. If so, are there any problems or disadvantages caused by not using parallel operations?
4. When extremely many data samples are stored, it seems that retrieving a cluster information for a data sample requires decoding every data sample if the sample is stacked first. Is this correct?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > What are the differences between RCC and ROC in cluster assignment problem? From my understanding, RCC stores cluster information by permutation. Then, how ROC-1 and ROC-2 store the clusterings?
That is correct, RCC stores the cluster information in the disjoint cycles of the permutation. ROC-1 and ROC-2 compress each cluster separately as a set of objects, so the clustering assignment is implicitly stored by knowing which set you are decoding.
> Is ROC only existing baseline method?
To the best of our knowledge, there is no existing method that addresses the compression of clustering assignments. However, a clustering can be seen as a collection of disjoint sets. It is therefore natural to adapt set compression algorithms, such as ROC, to compress clusterings.
> To leveraging permutation, it seems sequential stacking is required, and parallel stacking or decoding is not allowed. If it is, are there any problems or disadvantages caused by not using parallel operations?
This is correct: RCC does not allow “random access” to elements in the clusters. This is a limitation of all bits-back coding techniques. We will highlight this point in the camera-ready.
> When extremely many data samples are stored, it seems that retrieving a cluster information for a data sample requires decoding every data sample if the sample is stacked first. Is this correct?
This is partially correct. Since encoding requires randomizing the order, the element you are searching for can be in any position of the stack with equal probability. However, it is not necessary to decode the entire stack to retrieve the element, decoding can be stopped as soon as the element is found.
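As an illustration (a hypothetical sketch, not the authors' implementation), decoding from a bits-back stack can stop as soon as the target element is found; since the randomized order places the target uniformly, the expected number of decode steps is about half the stack size rather than the full size:

```python
def find_in_stack(stack, target):
    """Pop (decode) elements from the top of `stack` until `target`
    is found; return the number of decode steps, or None if absent."""
    steps = 0
    while stack:
        top = stack.pop()  # one decode operation
        steps += 1
        if top == target:
            return steps
    return None

# If the target sits 3 elements from the top, only 3 decodes are needed.
print(find_in_stack([3, 1, 4, 9, 5], 4))  # 3
```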
---
Rebuttal 2:
Comment: Thank the authors for the rebuttal. The rebuttal answers my questions, and I have no further questions. Accordingly, I will raise my score. | Summary: This paper proposes a coding scheme for lossless compression of cluster assignments. The proposed coding scheme is based on random order coding (ROC) with bits-back coding. Analysis and experiments show that it outperforms two variants of ROC in complexity and compression rate.
Strengths: The presentation and research methodology presented in this paper are good, and a comparison with baseline methods is included. Examples are provided for clear explanation.
Weaknesses: After carefully reading this work, I believe it is out of the scope of NeurIPS. This work applies Random Order Coding (ROC) with bits-back coding to encode clustering assignments. The main contribution is the way ROC is adapted to clustering-assignment encoding, which does not include any discussion of or connection to the topics listed in the NeurIPS 2024 Call for Papers.
Additionally, the paper does not provide any literature on clustering assignment compression. I am not a database researcher, but is the clustering assignment a significant overhead in practice? According to the authors, "the number of bits used to store the vector embedding ranges from 4 to 16 bytes, while ids are typically stored as 8-byte integers". It is very unexpected that, in practice, the id takes up more than half of the storage. Could the authors provide more references or elaborate on this?
Technical Quality: 2
Clarity: 2
Questions for Authors: See weakness
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > After carefully reading this work, I believe it is out of the scope of NeurIPS. [...] does not include any discussion or consideration on the topics listed on NeurIPS 2024 Call For Paper.
This work fits in the call for papers under “Infrastructure”, in the sub-category “improved implementation and scalability”. Furthermore, there have been many papers on this exact topic at NeurIPS, in the category of “Probabilistic methods”, as well as at sister conferences such as ICML and ICLR (see references).
The target application of this compression algorithm is for vector databases, which are widely used in practice for embedding retrieval in foundation models, as well as similarity search, the most famous of which is FAISS [1]. Vector databases provide fast similarity search via efficient KNN search on accelerated hardware (e.g., GPUs). There is an extensive list of applications currently using vector databases such as for augmenting LLMs with memory (see [3] for an extensive discussion); as well as companies providing vector databases as a service such as Pinecone (https://www.pinecone.io/), Milvus (https://milvus.io/), Amazon Web Services (AWS, https://aws.amazon.com/what-is/vector-databases/), Zilliz (https://zilliz.com/), and many others.
From a theoretical perspective, the main ideas of this paper advance the line of research on bits-back coding, an approach that was invented in and has deep roots in the "probabilistic methods" community of NeurIPS, and adapts it to practical settings. See [4, 5, 6, 7, 8, 9, 10, 11, 12] for papers recently published at NeurIPS, as well as other venues, on bits-back coding.
Reducing the memory footprint of vector databases is important for many reasons, and has been highlighted in the official documentation of production-level vector databases [2, 3]. We highlight 2 of these reasons. First, it enables the use of better search algorithms in real-time which require more memory to run. Second, similarity search is usually done on batches of queries in practice; and, therefore, reducing the memory footprint of the database index enables us to increase the batch size and speed up the throughput of retrieval.
- [1] https://github.com/facebookresearch/faiss
- [2] https://github.com/facebookresearch/faiss/wiki/Indexes-that-do-not-fit-in-RAM
- [3] (The Faiss library)[https://arxiv.org/abs/2401.08281]
- [4] (ICLR 2019) [Practical Lossless Compression with Latent Variables using Bits Back Coding](https://arxiv.org/abs/1901.04866)
- [5] (NeurIPS 2021) [Variational Diffusion Models](https://proceedings.neurips.cc/paper/2021/hash/b578f2a52a0229873fefc2a4b06377fa-Abstract.html)
- [6] (NeurIPS 2021) [Maximum Likelihood Training of Score-Based Diffusion Models](https://proceedings.neurips.cc/paper/2021/hash/0a9fdbb17feb6ccb7ec405cfb85222c4-Abstract.html)
- [7] (NeurIPS 2019) [Integer Discrete Flows and Lossless Compression](https://proceedings.neurips.cc/paper/2019/hash/9e9a30b74c49d07d8150c8c83b1ccf07-Abstract.html)
- [8] (NeurIPS 2020) [Improving Inference for Neural Image Compression](https://proceedings.neurips.cc/paper/2020/hash/066f182b787111ed4cb65ed437f0855b-Abstract.html)
- [9] (ICLR 2020) [HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models](https://arxiv.org/abs/1912.09953)
- [10] (ICLR 2021) [IDF++: Analyzing and Improving Integer Discrete Flows for Lossless Compression](https://arxiv.org/abs/2006.12459)
- [11] (NeurIPS 2019) [Compression with Flows via Local Bits-Back Coding](https://proceedings.neurips.cc/paper/2019/hash/f6e794a75c5d51de081dbefa224304f9-Abstract.html)
- [12] (NeurIPS 2021) [iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder](https://proceedings.neurips.cc/paper/2021/hash/2e3d2c4f33a7a1f58bc6c81cacd21e9c-Abstract.html)
- [13] (NeurIPS 2021) [Your Dataset is a Multiset and You Should Compress it Like One](https://openreview.net/forum?id=vjrsNCu8Km) - Best paper award NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications.
> Additionally, the paper does not provide any literature on clustering assignment compression. I am not a database researcher, but is the clustering assignment a significant overhead in practice? [...] It is very unexpected that, in practice, the id takes up more than half of the storage. [...]?
The size of the index is composed of the number of bits required to represent the ids plus the number of bits required to represent the vectors. Currently, there is a vast literature on lossy compression of vector databases (see [14] and [15] for a survey), which significantly reduces the size of these high-dimensional vectors to a few bytes. In many cases, the vectors can be reduced to less than 8 bytes per vector (note, not *per-dimension*, but 8 bytes to represent the entire vector) while still maintaining search performance useful for many applications (e.g., see Figure 3 of [14] where Deep1M is compressed to less than 8 bytes per vector). In extreme cases, such as [4], Table 2, the authors show that 4 bytes can be enough to maintain some level of recall performance.
While lossy vector compression has been reasonably explored, very little to nothing has been done for the indices. Typically, users store custom indices together with the vectors (e.g., in FAISS this is done through the `index.add_with_ids` method where `index` is some inverted index). These indices can be of arbitrary size, and are commonly 32 bit (4 byte) or 64 bit (8 byte) integers. The indices therefore can represent a significant share of the memory required to store these databases.
We will add a discussion to the introduction during the camera-ready to highlight that the lossy compression of vectors to less than 8 bytes increases the relevance of the index compression.
- [14] (ICML 2024) [Residual Quantization with Implicit Neural Codebooks](https://arxiv.org/abs/2401.14732)
- [15] https://github.com/erikbern/ann-benchmarks
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions. I have no further comments and questions. I will raise my rating. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces Random Cycle Coding (RCC), a method for lossless compression of cluster assignments in data sets. RCC encodes data sequentially, representing cluster assignments as cycles in permutations, which eliminates the need for artificial labels. The method is shown to achieve the Shannon bound in bit savings, and scales quasi-linearly with the largest cluster size. Empirical results show optimal byte savings across a range of cluster sizes for multiple datasets.
Strengths: 1. The proposed algorithm for encoding cluster assignments is novel and elegantly leverages the properties of clusterings and permutations to achieve theoretically optimal compression.
2. The approach also shows significant savings in experiments including on vector databases which can translate into gains across all machine learning approaches that rely on retrieval from vector databases.
Weaknesses: Some of the technical details are not clearly explained (see question below) and overall, the details of the approach may be difficult to follow for audiences not familiar with the relevant source coding literature. I would recommend adding an example to illustrate how Algorithm 1 works end-to-end and why it works, either in the main paper or in the appendix, to remedy this.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why does the encode + decode time for RCC in Fig. 2 decrease as 'k' increases?
2. If the elements in each cycle are sorted (line 188) then how can the original order of the elements be recovered? If it cannot be recovered, then is this approach only limited to settings where the ordering of database elements is not important?
3. How is the centroid to cluster mapping information stored? I believe it will be needed to identify the clusters in which to search for the k nearest neighbors in the second stage of FAISS right?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the suggestion. An example of encoding and decoding a cluster assignment has been added to the appendix to further clarify the algorithm.
> Why does the encode + decode time for RCC in Fig. 2 decrease as 'k' increases?
This can be understood by noting that, in the experiment, as the number of clusters $k$ increases, the number of elements in each cluster $n_i = \frac{n}{k}$ decreases.
The total complexity of RCC can be read from Algorithm 1, lines 2 and 3. The subroutine that encodes a single cluster of size $n_i$ (i.e., line 2 of Algorithm 1) has complexity $O(n_i \cdot \log n_i) = O(\frac{n}{k} \cdot \log \frac{n}{k})$. Line 3 encodes a single element, which is very fast, for every cluster, adding complexity $O(k)$. As $k$ increases, more and more time is spent on line 3 than line 2 of Algorithm 1.
The total complexity of RCC is $O(k + n \cdot \log\frac{n}{k})$. If $k=1$, then all elements are in a single cluster, and the complexity is $O(n \cdot \log(n))$. At the other extreme, $k=n$, each element is in its own cluster, and the complexity is $O(n)$. In between these extremes, as $k$ increases, there will be more clusters (more time spent on line 3 of Algorithm 1), but every cluster will be smaller (less time spent on line 2 of Algorithm 1), and the saving on line 2 outweighs the added cost of line 3, since line 2 is the computationally expensive step.
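This trade-off can be checked numerically. The sketch below is an illustrative cost model only, not the authors' implementation: `rcc_cost` is a hypothetical function that assumes equal-size clusters, charging $n_i \log_2 n_i$ per cluster for line 2 of Algorithm 1 and $O(1)$ per cluster for line 3.

```python
import math

def rcc_cost(n: int, k: int) -> float:
    # Hypothetical operation count for Algorithm 1 with k equal-size
    # clusters of n_i = n/k elements: line 2 costs n_i*log2(n_i) per
    # cluster, line 3 costs O(1) per cluster.
    n_i = n / k
    per_cluster = n_i * math.log2(n_i) if n_i > 1 else 0.0
    return k + k * per_cluster

n = 1 << 20
costs = [rcc_cost(n, k) for k in (1, 256, n)]
# Cost shrinks monotonically from k=1 (one big cluster, ~n*log2(n))
# down to k=n (singleton clusters, ~n), matching O(k + n*log(n/k)).
```

Evaluating at the two extremes reproduces the bounds stated above: roughly $n \log n$ at $k=1$ and exactly $n$ at $k=n$.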
We will add this discussion to the camera-ready.
> If the elements in each cycle are sorted (line 188) then how can the original order of the elements be recovered? If it cannot be recovered, then is this approach only limited to settings where the ordering of database elements is not important?
This is correct: the order can never be recovered, but note this is by design. We assume clusters are sets of objects, i.e., unordered collections. The fact that elements have some order is due to the nature of how we represent information on a computer (i.e., memory is inherently sequential).
This is the case for similarity search databases such as FAISS, for example, where there is freedom to reorder vectors within a cluster/Voronoi cell, as well as to reorder the centroids (as long as each centroid still aligns with its correct cluster).
We will highlight this point in the camera-ready.
> How is the centroid to cluster mapping information stored? I believe it will be needed to identify the clusters in which to search for the k nearest neighbors in the second stage of FAISS right?
This is an important practical consideration, which is compatible with RCC. Line 1 in Algorithm 1 sorts the elements according to Foata’s Canonicalization (Definition 3.3). The centroids must simply be re-ordered to align with their respective clusters.
We will add a discussion on what quantities we assume can be re-ordered to the introduction in the camera-ready.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for addressing my concerns. As I had already recommended accepting the paper, I will keep my score. | null | null | null | null | null | null |
A test of stochastic parroting in a generalisation task: predicting the characters in TV series | Reject | Summary: The authors present an analysis of logistic regression on sentence embeddings as a way to predict the speaker of a particular line of dialogue in The Big Bang Theory. Specifically, they fit a PCA model to embeddings obtained from a sentence transformer and then use each PCA dimension as a linear feature. The authors present some qualitative analysis of the most predictive PCA dimensions and some quantitative analysis of the classification accuracy. In addition, they present a brief analysis of the ability of GPT-4 to directly classify lines of dialogue and compare it to a limited user study.
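The pipeline this summary describes — PCA on sentence-transformer embeddings, then logistic regression with each principal component as a feature — can be sketched end-to-end. Everything below is illustrative: the "embeddings" are synthetic Gaussians standing in for all-MiniLM-L6-v2 outputs, and the plain gradient-descent classifier is a stand-in for whatever solver the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for sentence embeddings of dialogue lines from
# two speakers (the paper uses a real sentence-transformer model).
n, d = 200, 50
X = np.vstack([rng.normal(0.0, 1.0, (n, d)),   # speaker A
               rng.normal(1.5, 1.0, (n, d))])  # speaker B
y = np.array([0] * n + [1] * n)

# PCA: centre the data, take the top-k right singular vectors.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
Z = Xc @ Vt[:k].T  # each PCA dimension becomes one linear feature

# Logistic regression on the PCA features, by plain gradient descent.
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    g = p - y
    w -= 0.1 * (Z.T @ g) / len(y)
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(Z @ w + b)))) > 0.5
acc = (pred == y).mean()
```

On such well-separated synthetic classes the top principal component already captures most of the "speaker" signal, so accuracy lands near 1; real dialogue embeddings would be far noisier.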
Strengths: The authors have identified an interesting and important debate in the AI community — more tests which help researchers discriminate between mere stochastic parroting and true generalization are certainly needed! In addition, the authors are very thorough in their description of the methods involved and their qualitative analysis of the PCA features is extensive. I also appreciate the thought the authors have given to the limitations of their study and the need for further work.
Weaknesses: First and foremost, I feel that this paper needs to be much clearer and more focused in its research question. The introduction indicates that the objective of the study is to determine the extent to which the apparent ability of large language models like GPT-4 to generalize to novel tasks is actually attributable to their ability to parrot data from their training. However, a good part of the analysis appears dedicated to the specifics of The Big Bang Theory and the features of its dialogue. Section 3.1, for instance, extensively interrogates the PCA dimensions obtained from the sentence embeddings in a way that feels very specific to the particular dataset. Similarly, the conclusion claims that the ability of logistic regression to predict the speaker with reasonable accuracy is due to stereotyping in the characterization of the show. These claims are potentially warranted given the experimental evidence (though a more detailed and statistically-motivated analysis would be necessary to make such claims with certainty), but feel as though they belong in a different paper (a potentially quite interesting paper for a different venue, I should add). The connection between these results and the initial framing of LLM evaluation remains, unfortunately, somewhat murky. This is not to say there is no possible link between dialogue speaker prediction and LLM abilities! I encourage the authors to think about this problem more and articulate the specific claim they hope to interrogate.
On that note, and assuming that the main motivation is indeed to study large language models, I feel that the analysis could be strengthened. First, it would be helpful to justify some of the specific decisions made as part of evaluation. For instance, why were the Big Bang Theory and Friends selected over other possible dialogue datasets? Why was dialogue speaker prediction always studied between exactly two characters? Why were these specific characters selected? Do the characters have a similar number of lines, or are there other statistical biases in the dataset that might affect the results? When proposing a novel task, it’s important to make the assumptions and decisions that went into the task selection clear.
With regards to human evaluation, I encourage the authors to widen their study. That is to say, a user study which consists of only two participants (both of whom are related to one of the authors) makes it difficult to ascertain the reliability of the results. Indeed, I would suggest a study consisting of a larger number of participants (ideally participants who do not have any externally motivating factors like relationships to the authors) so that a more general measure of human ability can be obtained. Further, I think it could actually be preferable for the participants to not have prior experience with the television show. This would make the test more an examination of the ability of participants to generalize their knowledge of personality traits to a novel situation instead of their ability to recall information (which is, ostensibly, closer to the desired research question in LLMs).
Despite these critiques, I hope that the authors continue to refine their research question, justification, and methodology. There are interesting questions to study here!
Technical Quality: 1
Clarity: 2
Questions for Authors: **Questions**
- See above for questions on task selection
- How does this classification task differ from those previously studied? e.g. those used in https://arxiv.org/abs/2309.07755 or https://arxiv.org/abs/2402.07470
- Is there any effect of the prompt on the downstream results? (For GPT-4)
- Is there any effect of random seed on the downstream results? (All models)
**Notes**
- The paper could benefit from an additional round of proofreading (e.g. “analyze” in Section 2.2 —> “analysis”, “annex” —> “appendix” throughout the paper)
- The paper contains links to a few public GitHub repositories, which technically may violate double-blind review. The authors should be careful about including identifiable information in anonymous submissions
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 1
Limitations: I feel that the authors have been very up front with the limitations of their work and have situated it in the context of broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | null | Summary: The authors are focused on whether or not LLMs can be thought of as stochastic parrots or contain "Sparks of AGI". They look into what kind of data is recoverable from internal LLM representations. Specifically, the authors investigate to what extent the task of identifying TV personalities (e.g. Penny vs Sheldon) based on their dialogue lines is solvable using various methods. The authors compare a classifier based on PCA components extracted from existing LLM embeddings, GPT-4 zero shot performance, and human expert judgments. They find that all methods show fairly good performance, with human experts showing best results, followed by GPT-4, followed by the classifier. The authors also present a brief qualitative analysis, interpreting the more prevalent axes of variation in the embeddings identified using PCA.
Strengths: The authors tackle a very ambitious and important problem. The writing is clear throughout, and the authors provide extensive background for the methods they use.
Weaknesses: I need to preface this by saying that I hope that my negative review does not discourage the authors from further pursuing the topic. I feel bad for having to reject this paper as it has some good ideas behind it and has an intention of researching a highly important problem. I hope that in next iterations, their work can be improved and expanded. At present, unfortunately, it does not match publication standards. I will try to explain why, and give pointers on how to potentially fix it in the future.
The biggest flaw of the paper is the experiment design. The authors never clearly define what exactly it means to be a "stochastic parrot" as opposed to "general intelligence". The authors also don't explain how their experiments would help to decide one way or another. So the results we have are impossible to interpret. It would help to go back to the original question and work through the argumentation more clearly: if the internal LLM representations contain information about TV personalities, does that make the model more or less of a stochastic parrot, and why?
Otherwise, the experiments give a very exploratory impression. For example the authors run PCA on sentence embeddings computed on their dataset and interpret the components. But it's unclear why and how this would help to answer the main question the paper attempts to answer.
Additionally, the paper's methods are extremely well-known, but unfortunately, the authors don't refer to relevant literature. The work highly overlaps with the topic of linear and nonlinear probes, as well as with the general theme of transfer learning. In essence, what the authors did can be described as adding a "classification head" to a pre-existing LLM. This is a very well-known technique.
If we want to gain new insights into what the models are doing, it is usually more interesting to look into the computations in intermediate layers of the model, rather than the last embedding layer. It is also often desirable to look at causal probes (rather than just a classifier).
Lastly, there are certain writing choices that deviate from common "conventions" in academic publishing. For example, oftentimes the authors go into excessive detail on well-known methods (explaining how PCA works and what a covariance matrix is). I highly suggest that the authors look at existing successful papers that use similar methods and copy their approach when it comes to decisions on what to explain in the main text, what to put into the appendix, and what to omit. The general rule of thumb is that newly introduced and important ideas should be at least briefly given in the main text, with extra details given in the appendix. Extremely well-known and established methods such as Principal Component Analysis don't need a full explanation, and a simple reference to the original source is enough.
I really hope that the authors don't get discouraged and try to refine and improve their research in the future. The first starting point would be to more clearly define the problem, and to study in depth the existing literature on linear probes and probing in general, and on investigating what the internal LLM representations contain. One potential starting point is the paper "Evaluating the World Model Implicit in a Generative Model", Vafa et al. 2024 and related works.
Technical Quality: 1
Clarity: 3
Questions for Authors: How exactly do we go from studying internal representations to the conclusions about stochastic parroting? For example, there are Neuroscience works that show that one can recover a lot of what a person is thinking about (see the studies related to the "Grandmother cell" idea). A lot of visual input can be reconstructed as well. Does it make humans stochastic parrots as well?
Basically, if GPT-4 can answer a given question (Penny vs Sheldon), we already know that its internal representations contain information needed to answer that. The same goes for human brains. I don't fully understand how it relates to the question of "stochastic parrotedness".
Confidence: 5
Soundness: 1
Presentation: 3
Contribution: 1
Limitations: The authors acknowledge some of the limitations of their study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | null | Summary: This paper’s main contribution is to apply a logistic regression on the principal components of the LLM embeddings for classifying TV series characters based on their dialog lines. The main finding is that the logistic regression approach does worse than GPT-4 in predicting TV characters, but is comparable to human evaluations with two annotators.
Strengths: The paper focuses on an interesting angle: using language model features to attribute dialogue lines to characters in TV shows.
Weaknesses: The methodology of using logistic regression over PCA of language model embeddings is not novel, and there are no rigorous quantitative evaluations of the method beyond qualitative examples. The connection of the method and task to the broad discussion around "sparks of AGI" and "stochastic parrots" is far-fetched.
Technical Quality: 1
Clarity: 1
Questions for Authors: Could you address the technical novelty of the method proposed?
Confidence: 5
Soundness: 1
Presentation: 1
Contribution: 1
Limitations: The paper claims that "the contribution of the paper is primarily methodological, and their study is limited to a qualitative study of two very specific datasets." However, the method they adopt is a fundamental ML technique, which lacks novelty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 1
Code Of Conduct: Yes | null | Summary: The paper aims to prove LLMs work as "stochastic parrots" (Bender et al) rather than "sparks of agi" (Bubeck et al). To prove this claim, the paper presents an experiment where a task can be solved by training a linear model (logistic regression) on top of PCA of the LLM output. The authors then claim, based on the linear model experiments, that the LLM doesn't exhibit any sparks of agi due to the ability of (nearly) solving the task using linear models.
Strengths: The authors show a simple linear model trained on the output of an LLM for a given task is good enough to solve it, compared to using a GPT4 model, raising questions on the supposed intelligence often ascribed to the model.
Weaknesses: While I generally agree that LLM's are closer to "stochastic parrots" than "sparks of agi", the claim that it can be proved using the proposed PCA experiments is weak to me.
- Firstly, the embeddings are essentially the output of the LLM in question (all-MiniLM-L6-v2) - I would call them outputs rather than embeddings, as "embeddings" suggests the input word embeddings of the model, which is clearly not the case here.
- Secondly, the outputs itself being feature rich to be used for classification is unsurprising. It is expected the principal components of this embedding would be useful in predicting the properties of the task (as shown in the projection of PCA plots). This just shows the underlying model (SentenceBERT here) is good at extracting rich semantic and syntactic features from the input sentence (probing literature essentially proves that [1]).
- Lastly, the experiment also shows the representations extracted from the sentence embedding model are sufficient for the task. For a harder task, if the linear probe on all-MiniLM-L6-v2 performed poorly relative to GPT-4, that would also not conclusively prove the ability of GPT-4 is due to any sparks; rather, it could be explained by GPT-4's own embedding features being richer. That is, a linear probe trained on GPT-4 embeddings for a harder task would also likely mimic its own performance. (This is theoretical, as neither the authors nor anyone other than OpenAI has access to those embeddings.)
[1] https://aclanthology.org/D19-1250/
Technical Quality: 2
Clarity: 3
Questions for Authors: None
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: There are no explicit limitation section, however the last paragraph of conclusion discusses it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Toward Robust Incomplete Multimodal Sentiment Analysis via Hierarchical Representation Learning | Accept (poster) | Summary: The paper addresses the issue of missing modality information in multimodal systems and proposes solutions for two key problems:
1. Excessively complex feature interactions lead to information redundancy and cumulative errors.
2. Previous works did not align representations semantically.
In enhancing multimodal representations, the core approach utilizes the transformer’s encoder and decoder. It strengthens inter-modal interactions post-encoder to ensure semantic relevance is maintained in the decoder's reconstruction.
The model achieves state-of-the-art (SOTA) performance and includes detailed ablation studies and experimental analyses.
Strengths: 1. The model considers the differences between semantic and modality representations, utilizing the transformer’s encoder and decoder processes for reconstruction to facilitate inter-modal interactions.
2. It accounts for the impact of Mutual Information (MI) on the learned representations, using MI to enhance the quality of these representations.
3. The paper conducts thorough experiments, addressing the model's robustness when adjusting the modality missing ratio.
Weaknesses: 1. The use of numerous symbols in both the figures and text increases the complexity and time required for understanding. DIFFICULT TO READ
2. In the introduction, the problem is not clearly stated; it is too brief. It only specifically mentions SMIL's approach without thoroughly analyzing the issues present in previous methods.
3. It appears that the model relies heavily on MRM (Modality Stochastic Missing) for masking certain information to generate task-specific data. This raises concerns about the model's dependency on MRM. If MRM focuses on masking emotional words, does it hinder the model's understanding?
4. The paper does not consider using large models to address this problem, nor does it compare the performance with that of large models.
5. The HMI (Hierarchical Mutual Information) module applies Bengio's MI concepts rather straightforwardly, lacking in innovation.
6. HAL (Hierarchical Adversarial Learning), presented as a separate contribution, does not make a significant impact in terms of performance or design. The hierarchical aspect merely reflects multi-scale representation.
Technical Quality: 3
Clarity: 1
Questions for Authors: Application Scenario: In the introductory example, if the crucial information "bored" is missing in L (Linguistic), A (Acoustic), and V (Visual) modalities, how is it still relevant to determine the task? What basis is used for this determination?
Model Design: Why is it necessary to include a modality encoder in the model? Ensuring semantic interactions across different modalities should be sufficient.
Did the baselines you compared against also use the SAME masking strategy as you did?
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: See Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: About symbolic representation in figures and texts.
**A1**: Thank you for the reminder! It is necessary that we use symbols in figures and text to represent the data flow and workflow of each component in the framework. We will improve the representation in the revision to make it easier to read.
***
**Q2**: About problems in the previous methods.
**A2**:
* As stated in lines 34-38 in the main manuscript, we have summarized the common problems in previous methods:
* performing complex feature interactions on missing modalities leads to a large amount of information redundancy and cumulative errors, which affects the effectiveness of affective semantics extraction.
* The lack of consideration of semantic alignment and distributional alignment during feature reconstruction results in the inability to accurately recover features and produce robust joint multimodal representations.
* In this paper, we only take SMIL as an example, which also suffers from the above issues and is not a special case. We promise to improve the representation in the revision.
***
**Q3**: If MRM focuses on masking emotional words, does it hinder the model's understanding?
**A3**: Thank you for your comment! We need to clarify that:
* the MRM process is stochastic and does not mask out some specific words, and this stochastic paradigm effectively enhances the model's ability to deal with complex modality missing situations.
* When MRM is discarded, the decreased performance of the model proves its indispensability and importance, as shown in Table 4 in **reply.pdf**. The testing conditions include Intra-Modality Missingness (Intra-MM), Inter-Modality Missingness (Inter-MM), and Complete Modality (CM).
***
**Q4**: Please compare the proposed framework with the larger models.
**A4**: Insightful comments. We want to emphasize the current immaturity of using large models to solve MSA tasks with three modalities: language, audio, and video. We did our best to select VideoLLaMA-1/2, which has good support for all three modalities, for the comparison experiments in Table 5 of **reply.pdf**. The testing conditions include Intra-Modality Missingness (Intra-MM), Inter-Modality Missingness (Inter-MM), and Complete Modality (CM). We find it difficult for large models to deliver significant gains when modality semantics are incomplete. In contrast, large models show promising potential for traditional perception tasks when all modalities are available.
***
**Q5**: Please describe the technical contributions of HMI and HAL.
**A5**: We offer the following technical clarifications:
* The concept of mutual information and adversarial learning are not simply combined and used, but are specifically designed to address the problems and limitations of existing MSA methods under uncertain missing modality. Specifically, existing methods lack effective supervision of semantic alignment and distributional alignment during feature reconstruction, resulting in the inability to accurately restore missing sentiment semantics, as stated in lines 37-38 & 87-90 & 157-160 & 184-188 in the main manuscript. In contrast, the proposed interaction paradigm guides the reconstruction of missing sentiment information both at the semantic level and at the distributional level, thus recovering realistic modality features as much as possible.
* HMI and HAL bring favorable performance gains, as shown in Table 3 in the main manuscript and Table 6 in the Appendix, which demonstrate the necessity and importance of both mechanisms.
***
**Q6**: Application Scenario: In the introductory example, if the crucial information "bored" is missing in L (Linguistic), A (Acoustic), and V (Visual) modalities, how is it still relevant to determine the task? What basis is used for this determination? Model Design: Why is it necessary to include a modality encoder in the model? Ensuring semantic interactions across different modalities should be sufficient. Did the baselines you compared against also use the SAME masking strategy as you did?
**A6**: Thank you for your comments!
* Application scenario: When critical information "bored" from all three modalities is missing, the model is still able to give judgments based on the following two grounds:
1) The inherent imbalance in the label distribution in the MSA task (more positive than negative samples) leads the model to potentially rely on label bias as a statistical shortcut to perform predictions. Bias-driven predictions contain task-relevant a priori information to some extent [1].
2) For multimodal sequential data with temporal asynchrony, MSA is often able to capture global contextual dependencies among elements during temporal modeling to provide task-relevant contextual semantics.
* Model design: modal encoders are used to unify dimensions and provide refined modal representations for subsequent semantic interactions. For a fair comparison, the baseline and our framework use the same mask strategy.
[1] Yang, Dingkang, et al. "Towards multimodal sentiment analysis debiasing via bias purification." In ECCV 2024. | Summary: The paper addresses the challenge of data incompleteness in Multimodal Sentiment Analysis (MSA). It introduces a novel approach called the Language-dominated Noise-resistant Learning Network (LNLN). The LNLN leverages the dense sentiment information in the language modality, considered the dominant modality, to improve robustness across various noise scenarios. It features two main components: a dominant modality correction (DMC) module and a dominant modality-based multimodal learning (DMML) module, which enhance the quality of the dominant modality representations. The model's performance was evaluated using datasets like MOSI, MOSEI, and SIMS, demonstrating superior robustness and accuracy compared to existing baselines. The comprehensive experiments provide new insights and a thorough comparative analysis in the context of incomplete data, advancing the field of MSA.
Strengths: The introduction of the Language-dominated Noise-resistant Learning Network (LNLN) is innovative, addressing the issue of data incompleteness effectively by prioritizing the language modality, which is typically rich in sentiment information.
The authors conduct thorough experiments on well-known datasets (MOSI, MOSEI, SIMS), adding credibility to their findings. The detailed comparison with existing methods under diverse noise scenarios is particularly valuable.
The use of the Dominant Modality Correction (DMC) module and Dominant Modality Based Multimodal Learning (DMML) module is well-justified and systematically enhances the model’s robustness by ensuring the quality of dominant modality representations.
Weaknesses: The focus on language as the dominant modality, while justified, may not generalize well to scenarios where other modalities (like visual or auditory) are equally or more critical. This could limit the applicability of the model to certain types of data or tasks.
Technical Quality: 4
Clarity: 4
Questions for Authors: Extending the approach to consider scenarios where visual or auditory data might be dominant could improve the versatility and applicability of the model.
More detailed ablation studies on MOSEI and SIMS would provide deeper insights into the workings and benefits of the proposed model.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: About model applicability in different scenarios.
**A1**: Many thanks to the reviewer for the constructive suggestions. We would like to clarify several points.
(1) In the MSA task, the language modality contains more refined and rich sentiment semantics than the other modalities, and thus language plays a dominant role in MSA. (2) The training paradigm designed in this framework can effectively capture the complementary sentiment semantics among heterogeneous modalities, which enhances the applicability and scalability of the proposed model under multiple data types and multiple tasks. (3) Limited by rebuttal time, we promise to extend the proposed approach to other modality-dominated scenarios to provide comprehensive insights and perspectives in future work.
***
**Q2**: Add more detailed ablation studies.
**A2**: Valuable recommendations! The results of the ablation study of the proposed framework on the MOSEI and SIMS datasets are shown in Table 3 in **reply.pdf**. We will add these ablation experiments to the revision.
*** | Summary: The paper presents the Representation Factorization and Alignment (ReFA) framework for Multimodal Sentiment Analysis (MSA) under uncertain missing modalities. ReFA employs a fine-grained representation factorization module to extract sentiment-relevant and modality-specific representations through crossmodal translation and sentiment semantic reconstruction. It introduces a hierarchical mutual information maximization mechanism to align and reconstruct high-level semantics incrementally. Additionally, a hierarchical adversarial learning mechanism progressively aligns latent distributions to create robust joint multimodal representations. Experiments on three datasets show that ReFA significantly enhances MSA performance under both uncertain missing-modality and complete-modality conditions.
Strengths: Strength:
1. One of the challenges in multimodal sentiment analysis is the potential for missing modality information in real-world scenarios. This study addresses this practical issue by proposing an effective algorithm with notable real-world applicability.
2. The motivation for the research is clearly articulated, pinpointing the shortcomings of existing studies with strong logical coherence.
3. The proposed algorithm achieves state-of-the-art (SOTA) results across relevant datasets, which validates its effectiveness to a significant extent.
Weaknesses: Weakness:
1. The idea and design of Intra- and Inter-modality Translations are sound; however, the implementation of the translation loss is overly simplistic and lacks a task-specific approach, making the methodology appear somewhat naive.
2. Similarly, the Sentiment Semantic Reconstruction section suffers from the same issue, with a basic and unrefined approach that fails to leverage the complexity of the task.
3. Both sections give the impression that while Translations and Reconstruction are being performed, the methods are indistinguishable aside from their goals. This indicates a lack of differentiation in handling the unique characteristics of each type of information.
4. The HMI and HAL components seem to merely apply two loss functions to the multi-scale features of the teacher-student network. This approach is quite common in knowledge distillation and thus lacks significant innovation.
5. Moreover, the paper lacks relevant case studies to validate the effectiveness of the proposed algorithm. There is also an absence of error analysis to identify the limitations and shortcomings of the method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The idea and design of Intra- and Inter-modality Translations are sound; however, the implementation of the translation loss is overly simplistic and lacks a task-specific approach, making the methodology appear somewhat naive.
2. Similarly, the Sentiment Semantic Reconstruction section suffers from the same issue, with a basic and unrefined approach that fails to leverage the complexity of the task.
3. Both sections give the impression that while Translations and Reconstruction are being performed, the methods are indistinguishable aside from their goals. This indicates a lack of differentiation in handling the unique characteristics of each type of information.
4. The HMI and HAL components seem to merely apply two loss functions to the multi-scale features of the teacher-student network. This approach is quite common in knowledge distillation and thus lacks significant innovation.
5. Moreover, the paper lacks relevant case studies to validate the effectiveness of the proposed algorithm. There is also an absence of error analysis to identify the limitations and shortcomings of the method.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The current distillation framework relies on a teacher network trained with complete modality data. A potential limitation of this approach is that its applicability to scenarios with missing modalities is inherently constrained by the performance ceiling of the teacher network. This dependency may limit the effectiveness and generalizability of the framework in handling diverse cases of missing modality data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: About modality translation and sentiment semantic reconstruction.
**A1**:
* Both modality translation and sentiment semantic reconstruction are designed for MSA tasks. Specifically, the core idea of modality translation is to utilize transitions among different modalities to achieve an effective extraction of sentiment-relevant representations. The purpose of the sentiment semantic reconstruction process is to ensure that the translation produces representations that can still contain sentiment semantics consistent with the original.
* The design of the translation loss and reconstruction loss is simple but effective; a more complex loss is difficult to train and performs poorly, as shown by the experimental results in Table 2 in **reply.pdf**. The testing conditions include Intra-Modality Missingness (Intra-MM), Inter-Modality Missingness (Inter-MM), and Complete Modality (CM). The design in this paper clearly performs best. Both losses center on the form of supervision rather than on the particular loss function.
***
**Q2**: About the technical contribution of HMI and HAL.
**A2**: Thank you for your comments! We need to clarify that:
* The multiscale feature-based knowledge distillation paradigm is indeed general and effective. However, the focus of this paper is not on the innovation of the knowledge distillation framework, but on fully utilizing the effective multiscale supervision paradigm in knowledge distillation to achieve hierarchical constraints between representations and missing feature reconstruction.
* Distinguishing from the traditional simple constraint approach based on multi-scale features (\emph{e.g.}, L2 distance), we propose a joint alignment mechanism based on mutual information maximization at the semantic level and adversarial learning at the distributional level.
* Instead of simply employing HMI and HAL, we address the lack of effective alignment of semantics and distributions in the feature reconstruction process for existing MSA methods under the missing modality cases, as stated in lines 37-38 & 87-90 & 157-160 & 184-188 in the main manuscript. The ablation experiments in Section 4.4 in the main manuscript and Section A.3 in the Appendix demonstrate the superiority of the proposed method.
***
**Q3**: About the case studies and error analysis.
**A3**: Valuable suggestions!
* To better demonstrate the effectiveness of the proposed method, we used two challenging cases for case studies, as shown in Fig. 3 in **reply.pdf**, where the underlined blue words may express emotional polarity and the missing modality is marked with a red dotted line. From the figure, we can find: 1) In E1, all models generate correct results even though the visual modality is missing. Due to the strong guidance of the textual word "amazing", the positive polarity is easy to identify. This case reveals that conventional approaches can perform well when the existing modalities express the same explicit semantics. 2) In E2, the textual modality expresses positive polarity, while the visual modality tends to be negative because of the frown and closed lips in the facial features. It is really hard to determine the polarity when the acoustic modality is missing. Specifically, Self-MM and CubeMLP misclassify the emotion as negative, and the other approaches except ReFA all predict positive sentiment owing to the dominance of the language modality. In contrast, our framework recognizes the emotion correctly. This advantage stems from the framework's factorization and capture of sentiment-relevant semantics, as well as the hierarchical knowledge distillation module's alignment of semantics and distributions, which accurately reconstructs missing features and produces robust joint multimodal representations.
* We have conducted T-tests in Table 1 and Table 2 in the main manuscript, and the stable and highly significant experimental results demonstrate the superiority of the proposed method. Furthermore, we have described the limitations of our framework in Section A.6 in the Appendix.
***
**Q4**: About the generalization of the framework using the teacher network.
**A4**: Valuable insights!
* We have clarified this limitation, as stated in Section A.6 in the Appendix. In the future, we will strive to optimize the generality of the approach, e.g., by using the self-distillation paradigm in the framework.
* The teacher network is trained on the original complete-modality data, which serves as a high-quality reference that transfers holistic knowledge contained in the complete samples to the student network. The student network accurately recovers the missing semantics during semantic and distributional alignment to the teacher network.
* The teacher network covers the information of the complete modality, and this supervised paradigm is effective and generalizable in a variety of modality missing scenarios, thus enhancing the generalization of the student network.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and thorough explanations. I have revised my rating to 8.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer JfYH
Comment: We thank the reviewer for the meticulous advice! | Summary: The paper addresses the challenges of multimodal sentiment analysis (MSA) in real-world applications, particularly when some modalities may be missing, which can hinder the effectiveness of the analysis. The authors propose a framework called Representation Factorization and Alignment (ReFA) to tackle the issue of uncertain missing modalities in MSA.
The ReFA framework consists of three core components:
1. Fine-grained Representation Factorization (FRF) module: This module extracts valuable sentiment information by factorizing each modality into sentiment-relevant and modality-specific representations through cross-modal translation and sentiment semantic reconstruction.
2. Hierarchical Mutual Information (HMI) maximization mechanism: This mechanism incrementally maximizes the mutual information between multi-scale representations to align and reconstruct the high-level semantics in the representations.
3. Hierarchical Adversarial Learning (HAL) mechanism: This mechanism progressively aligns and adapts the latent distributions of the representations to produce robust joint multimodal representations.
The authors conducted comprehensive experiments on three datasets, demonstrating that the ReFA framework significantly improves MSA performance under both uncertain missing-modality and complete-modality testing conditions.
Strengths: Originality:
1. The paper proposes a Representation Factorization and Alignment (ReFA) framework to address multimodal sentiment analysis under uncertain missing modalities.
2. Introduces innovative components like fine-grained representation factorization, hierarchical mutual information maximization, and hierarchical adversarial learning.
Quality:
1. Comprehensive experiments on three datasets (MOSI, MOSEI, IEMOCAP) demonstrate significant performance improvements.
2. Ablation studies validate the effectiveness of each proposed component.
3. Qualitative analysis with visualizations provides intuitive understanding of the framework's robustness.
Clarity:
1. The paper is well-structured, with clear sections on related work, methodology, and experiments.
2. Figures and tables effectively illustrate the framework and results.
Significance:
1. Addresses an important real-world challenge of missing modalities in multimodal sentiment analysis.
2. Shows consistent performance improvements over state-of-the-art methods across different missing modality scenarios.
3. The framework's robustness to both intra-modality and inter-modality missingness enhances its practical applicability.
Weaknesses: 1. The paper lacks a detailed discussion on the computational complexity and runtime performance of the proposed framework compared to existing methods.
2. While the proposed ReFA framework is innovative, the individual components (such as mutual information maximization and adversarial learning) have been explored in other contexts. The novelty primarily lies in their specific combination and application to MSA with missing modalities.
3. The paper did not mention the models that were used for the final classification or regression, only mentioned feature extraction models.
4. The paper doesn't discuss potential limitations of the approach or cases where it might not perform well.
5. There's no discussion on the framework's generalizability to other multimodal tasks beyond sentiment analysis.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the authors provide more details on the computational requirements and training time of ReFA compared to baseline methods?
2. How does the performance of ReFA change with varying amounts of training data? Is there a minimum data requirement for the framework to be effective?
3. Have the authors explored the applicability of ReFA to other multimodal tasks beyond sentiment analysis? If not, what modifications might be needed?
4. Could the authors provide insights into why the language modality seems to be particularly effective in unimodal scenarios?
5. Are there any scenarios or types of data where ReFA might not perform well? It would be helpful to discuss potential limitations.
6. How does the framework handle noisy data within the available modalities? Can the authors provide experimental results or discussions on the impact of noisy data on the performance of ReFA?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The paper does not explicitly address limitations or potential negative societal impacts of the work. Some suggestions for improvement:
1. Societal Impact: While the paper mentions broader impacts and limitations in the appendix, the main text lacks a detailed discussion on potential negative societal impacts. The authors should consider elaborating on ethical concerns, such as the potential misuse of sentiment analysis in sensitive applications or privacy issues related to multimodal data collection.
2. Discuss potential biases in the datasets used and how they might affect the model's performance across different demographic groups.
3. Acknowledge any limitations in the generalizability of the results to real-world, non-curated data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: About computational complexities.
**A1**: Thank you for your comments! We need to clarify that we have compared the proposed framework with existing methods in terms of parameter count, FLOPs, and performance under three testing conditions, as shown in Section A.4 of the Appendix. The proposed framework is the most robust to missing modalities while having the lowest FLOPs and fewer parameters than most methods, achieving a reasonable trade-off between complexity and performance.
***
**Q2**: About the technical contribution of the proposed components.
**A2**: We need to clarify that:
* Mutual information maximization and adversarial learning are not simply combined and employed but are specifically designed to address the problems and limitations of existing MSA methods under missing modality situations. Specifically, existing methods lack effective supervision of semantic alignment and distributional alignment during feature reconstruction, resulting in the inability to precisely recover sentiment semantics, as stated in lines 37-38 & lines 87-90 & lines 157-160 & lines 184-188 in the main manuscript. In contrast, the proposed interaction paradigm guides the reconstruction of missing sentiment information both at the semantic level and at the distributional level, thus restoring realistic modality features as much as possible.
* Our proposed components bring significant performance gains, which are essential in the framework. The ablation results in Section 4.4 in the main manuscript and Section A.3 in the Appendix demonstrate this advantage.
***
**Q3**: Description of the models used for the classification or regression.
**A3**: Thank you for your reminder! The models used for classification and regression are fully-connected layers, including two linear layers, a ReLU activation layer, and a Softmax layer (in the case of classification tasks). We promise to add detailed descriptions in the revision.
***
**Q4**: Discuss the potential limitations or cases where it might not perform well.
**A4**:
* We need to clarify that we have already discussed some of the limitations of this paper: the framework relies on a teacher network trained on complete-modality data, so its performance depends to some extent on the upper bound of the teacher network's performance, as stated in Section A.6 in the Appendix. In the future, we plan to use the self-distillation paradigm to improve the flexibility and applicability of the model.
* Furthermore, in real-world applications, modality missing cases can be very intricate and complex, leading to a possible minor loss in model performance. In the future, we will explore more complex modality-missing cases to compensate for this deficiency. We will add this limitation in the revision.
***
**Q5**: Discuss the framework generalization to other multimodal tasks.
**A5**: Valuable suggestions! We added comparison experiments of our framework against some of the baselines on humor detection (the UR-FUNNY dataset) and sarcasm discovery (the MUSTARD dataset) tasks, as shown in Table 1 in **reply.pdf**.
The testing conditions include Intra-Modality Missingness (Intra-MM), Inter-Modality Missingness (Inter-MM), and Complete Modality (CM). The superior experimental results demonstrate the generalization of the framework to multiple multimodal tasks.
***
**Q6**: About ReFA performance with data size.
**A6:** The framework has a strong generalization to datasets of different sizes for the following reasons:
* We have conducted comprehensive experiments on the MOSI, IEMOCAP, and MOSEI datasets, which are of sequentially increasing size, as shown in Section 4.3 in the main manuscript.
* In addition, we have conducted experiments based on different ratios of samples in the MOSI dataset, as shown in Fig. 1 in **reply.pdf**. The experimental results are the average under five different random seeds.
***
**Q7**: The reason for Language being effective in unimodal scenarios.
**A7**: Constructive Comments. We provide two insights below:
* Compared to the linguistic modality, non-linguistic modalities are potentially unreliable because audiovisual feature extraction tools typically introduce additional redundancy and noise.
* As highly abstract symbolic systems, linguistic modalities typically have higher information and knowledge density, providing more effective structured semantic representations.
***
**Q8**: About the noise effect on ReFA performance.
**A8**: We add various ratios of Gaussian noise to the MOSI dataset, and Fig. 2 in **reply.pdf** demonstrates the robustness of the proposed framework against noisy data, as the factorization mechanism adequately captures sentiment cues and filters out noise.
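For concreteness, a minimal sketch of one common corruption scheme (hypothetical: the rebuttal does not specify how the noise ratio is applied, so scaling the noise standard deviation by a ratio of the feature standard deviation is an assumption):

```python
import numpy as np

def add_gaussian_noise(features, ratio, rng):
    # Zero-mean Gaussian noise whose std is `ratio` times the feature std.
    # This scaling convention is an assumption; the rebuttal only states
    # that "various ratios of Gaussian noise" were added.
    noise = rng.normal(scale=ratio * features.std(), size=features.shape)
    return features + noise

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 32))  # stand-in for extracted MOSI features
noisy = {r: add_gaussian_noise(x, r, rng) for r in (0.1, 0.3, 0.5)}
```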
***
**Q9**: About potential social impacts.
**A9**: Thanks for the comments! We will add a detailed discussion of social implications to the main revision manuscript.
***
**Q10**: About potential bias in the dataset.
**A10**: In practice, we observe two possible biases in the datasets: label bias and context bias.
* Label bias usually occurs when the number of training samples for one category is substantially larger than for the others. Such an unbalanced data distribution leads trained models to rely heavily on label bias as a statistical shortcut, producing inaccurate predictions across demographic groups.
* Context bias emerges when trained models exhibit strong spurious correlations between specific categories and context words in the language modality. MSA models then tend to assign samples containing those words to an incorrect category based on biased statistical information rather than intrinsic textual semantics.
***
**Q11**: About result generalizability.
**A11**: Non-curated data in real-world applications may contain more intricate cases of missing modalities, leading to a slight performance loss of the model. We will add this description in the revision.
---
Rebuttal Comment 1.1:
Title: Maintaining Positive Assessment After Thorough Rebuttal
Comment: Thank you for your comprehensive rebuttal. I appreciate the time and effort you have invested in addressing each point raised in my review. After carefully considering your responses, I still hold a positive assessment of the paper.
Regarding computational complexity (Q1), I am pleased to see that you have included a comparison of parameters and FLOPs in the Appendix, which effectively addresses my concerns about computational requirements. On the topic of classification and regression models (Q3), I appreciate the details provided about the models used. Including this information in the main text will undoubtedly enhance clarity.
I am also glad to hear that you will be adding discussions on social impacts and dataset bias. These additions will significantly improve the comprehensiveness of the paper. Your acknowledgment of potential performance loss in more complex real-world scenarios demonstrates a balanced perspective on the result's generalizability.
Overall, your response has effectively addressed my concerns and questions. The additional experiments and planned revisions will further strengthen the paper, resulting in a more comprehensive and impactful contribution to the field of multimodality. In light of your thorough response and planned changes, I maintain my original assessment of the paper.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer MG71 zmpp
Comment: Thank you for the valuable suggestions and recognition of our work.
We promise to add the following to the revision: additional experiments, a description of the models used for classification or regression, a discussion of potential societal impacts, an analysis of potential biases in the dataset, and an acknowledgement of the generalizability of the results.
We will endeavor to make more valuable contributions to the multimodal community. | Rebuttal 1:
Rebuttal: We thank all reviewers for their suggestions and thoughtful comments.
Pdf: /pdf/6daf91580339650fa181d30a385e1a2e39bdb5d3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generalized Multimodal Fusion via Poisson-Nernst-Planck Equation | Reject | Summary: The paper introduces CrossCheckGPT, a novel method for assessing hallucination robustness in multimodal foundation models without requiring reference standards. Utilizing cross-system consistency, the proposed method aims to provide a universal evaluation framework capable of being applied across various domains and tasks. This approach contrasts significantly with traditional hallucination assessments, which rely on comparison with gold-standard references and are limited to specific domains.
Strengths: 1. The introduction of a reference-free universal hallucination ranking method addresses a significant gap in the evaluation of foundation models, particularly in new and emerging areas.
2. The paper effectively demonstrates the method's versatility across different modalities (text, image, and audio-visual), enhancing its relevance to a wide range of applications.
3. The development of the AVHalluBench, the first audio-visual hallucination benchmark, is a noteworthy contribution that sets a new standard for evaluating models in this complex domain.
Weaknesses: 1. The analysis on how different models' outputs are compared and the implications of these comparisons could be more detailed. Specifically, the paper lacks a deeper exploration into the sensitivity of CrossCheckGPT to variations in model architecture or training data.
2. Lack of additional visual representations of the data flow or examples of the hallucination checks.
3. Lack of a more comprehensive set of benchmarks, including more direct comparisons with state-of-the-art methods.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How does the CrossCheckGPT handle discrepancies in the quality of evidence models, especially when these models have different training backgrounds or data biases?
2. Is there a quantitative measure of the 'distance' or difference between the outputs of various models that CrossCheckGPT uses to assess consistency?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The effectiveness of the method hinges on the diversity and independence of the models used as evidence sources. This dependence could pose challenges in scenarios where similar or homogenous models are prevalent.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and provide feedback. However, it seems there might have been a mix-up, as your comments appear to address a different paper.
---
Rebuttal 2:
Title: Not relevant review!
Comment: Dear TT1j. The end of the discussion period is close. It seems that you provided a review for another paper. | Summary: The paper proposes a novel multimodal fusion method named Generalized Multimodal Fusion (GMF), which leverages the Poisson-Nernst-Planck (PNP) equation from physics to manage the feature fusion process in multimodal learning tasks. By treating features as charged particles, the method allows for a dynamic separation and recombination of modality-specific and modality-invariant features, thereby enhancing the fusion process and reducing the entropy in downstream tasks. This approach addresses common challenges in multimodal learning, such as feature dimension consistency, data integrity, and adaptability across various tasks.
Strengths: 1. The application of the PNP equation, traditionally used in physics to describe the dynamics of charged particles, to multimodal feature fusion is highly original.
2. The paper is grounded in a solid theoretical framework that is well-articulated and robust.
Weaknesses: 1. The method, while innovative, appears to be complex in terms of implementation, particularly in how features are treated as charged particles. This complexity might limit its accessibility or usability for practitioners not familiar with the underlying physical equations.
2. The paper could benefit from more rigorous quantitative analysis, including statistical significance tests and error analysis. Such analyses would provide a clearer picture of the method's performance relative to benchmarks.
3. Can the authors provide results on multimodal datasets with text-image modalities [1]?
[1] Provable Dynamic Fusion for Low-Quality Multimodal Data.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weakness.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Due to word count limitations, please refer to the "rebuttal to all" for some common issues. Here are our simplified responses:
## Key Concepts
### Feature Charge
Features are treated like charged particles in an electric field, moving in specific spaces for fusion.
### Feature Bipolarity
Like ions in solutions: $\text{NaCl}$ and $\text{KCl}$ have identical chloride ions ($\text{Cl}^-$), representing modality-invariant features. Sodium ($\text{Na}^+$) and potassium ($\text{K}^+$) ions are modality-specific features. The PNP equation can handle scenarios abstracted into particles of different polarities.
### Bipolarity in Feature Fusion
Decompose data into modality-specific and modality-invariant features, process separately, then fuse. This is like separating ions and recombining them into new compounds.
### GMF Fusion Process
GMF uses the Poisson-Nernst-Planck (PNP) equation to simulate feature movement in a high-dimensional space for fusion:
1. **Dissociation**: Decompose features into modality-specific and invariant components, like dissolving $\text{NaCl}$ and $\text{KCl}$ into ions.
2. **Concentration**: Apply an electric field to concentrate ions back into their original compounds, ensuring lower information entropy while maintaining meaning.
3. **Reconstruction**: Recombine features to ensure no information loss, like recombining $\text{Cl}^-$ from A with $\text{Na}^+$ from B to form $\text{NaCl}$.
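The three steps above can be illustrated with a minimal NumPy sketch (an assumption-laden toy, not the actual PNP dynamics: the random projection matrices, the averaging used for "concentration", and the concatenation used for "reconstruction" are all hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features for two modalities (batch of 4, dim 8).
x_a = rng.normal(size=(4, 8))
x_b = rng.normal(size=(4, 8))

# "Dissociation": linear maps split each modality into a
# modality-invariant part and a modality-specific part.
W_inv = rng.normal(size=(8, 8))
W_spec_a = rng.normal(size=(8, 8))
W_spec_b = rng.normal(size=(8, 8))

inv_a, spec_a = x_a @ W_inv, x_a @ W_spec_a
inv_b, spec_b = x_b @ W_inv, x_b @ W_spec_b

# "Concentration": pool the invariant components across modalities,
# standing in for the field-driven aggregation step.
inv_shared = (inv_a + inv_b) / 2

# "Reconstruction": recombine each modality's specific part with the
# shared invariant part to form the fused representations.
fused_a = np.concatenate([inv_shared, spec_a], axis=-1)
fused_b = np.concatenate([inv_shared, spec_b], axis=-1)

print(fused_a.shape, fused_b.shape)  # (4, 16) (4, 16)
```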
## Add Text Modality in ActivityNet (Main Paper, Table 3)
We additionally included text features in the image-video matching experiment, where multiple modalities were cyclically reconstructed. Only GMF with two input sets is compared, as other methods can't adjust three sets simultaneously. Results are shown below:
| Method | Modality | mAP@10 | mAP@20 | mAP@50 | mAP@100 | Params | FLOPs |
|-----------------------|----------|--------|--------|--------|---------|-----------|--------|
| GMF (128) | V-I | 0.349 | 0.335 | 0.323 | 0.308 | 0.33M | 0.00G |
| GMF (128) | V-I-T | 0.349 | 0.335 | 0.325 | 0.311 | 0.49M | 0.00G |
| GMF (4096-128) | V-I | 0.355 | 0.341 | 0.327 | 0.315 | 119.21M | 0.23G |
| GMF (4096-128-768) | V-I-T | 0.358 | 0.345 | 0.327 | 0.317 | 123.63M | 0.25G |
The text model outputs 768-dimensional features. GMF (128) maps text and video features to the same dimension as the image features, while GMF (4096-128-768) uses 4096-dimensional video, 128-dimensional image, and 768-dimensional text features. Text features guide the image and video features toward precise matching; low-dimensional text features add linear complexity, and high-dimensional text features add less-than-linear overhead.
## Evaluation on Text Modality with QMF[1]
The baselines for the experiments directly average the prediction results and then apply the cross-entropy (CE) loss.
### QMF Baseline, (1,3) Pooling
| Method | ACC(%) |
|---------------|--------|
| Baseline | 92.78 |
| QMF | 93.20 |
| GMF | 91.34 |
Experiments follow the QMF standard setup (6144-d image, 768-d text features). GMF's lower performance in the same environment as QMF is attributed to redundant features in the long feature dimensions, which do not fit the assumptions of the PNP equation. QMF demonstrates strong potential when feature representations are redundant and contain noise.
### QMF Baseline, Global Pooling
| Method | ACC(%) |
|---------------|--------|
| Baseline | 92.66 |
| QMF | 79.10 |
| GMF | 89.92 |
We observed that the pooling method used by QMF tends to generate higher-dimensional features. With the typically output pooled features (2048-d image), our experimental results are shown in the table. In this scenario, QMF exhibits a certain performance drop, whereas GMF shows an improvement. However, even in large models, 6144-dimensional features are quite rare for downstream tasks.
### Low-Dimension Features via ResNet-18
| Method | ACC(%) |
|------------|--------|
| Baseline | 91.36 |
| QMF | 77.10 |
| GMF(EMTs) | 90.97 |
| GMF(NMTs) | 93.78 |
In further experiments with a ResNet-18 feature extractor and global pooling, GMF's performance remained stable despite the reduced feature dimensions and layers. NMTs outperformed EMTs due to their ability to bypass irrelevant information in the text-modality features. This highlights GMF's adaptability to various tasks and datasets.
It is quite normal for fusion methods to score below the baseline, given the significant noise in the dataset's text features. For an image classification dataset, the best way to utilize text features, if they are needed at all, is to train them similarly to CLIP, e.g., via the NMT mode of GMF.
### Evaluation Based on GMF Baseline
| Method | Frozen A | Frozen V | Frozen AV | Training A (uni) | Training A (mul) | Training V (uni) | Training V (mul) | Training AV |
|----------|------|------|------|------|------|------|------|------|
| Baseline | 23.31 | 25.14| 28.56| 23.31 | - | 25.14 | - | 28.56|
| GMF | 22.01 | 24.32| 31.64| 21.83 | 21.55| 23.93 | 23.67| 32.01|
| QMF | 15.71 | 18.45| 31.57| 22.76 | 12.83| 24.35 | 14.62| 33.49|
Finally, we compared QMF with GMF using our experimental benchmarks. The intra-modality classification loss in QMF enhances its feature information content, providing an advantage over GMF, which lacks this direct influence on the feature extractor. Moreover, QMF dynamically calculates modality weights, leading to better performance during the fusion stage. However, the dynamic weight accumulation causes a decrease in QMF's performance when modalities are missing.
[1] Provable Dynamic Fusion for Low-Quality Multimodal Data.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have checked the rebuttal and tend to keep my score.
---
Rebuttal 2:
Title: Any comments?
Comment: Dear qupC. The end of the discussion period is close. I would be grateful if you provide a feedback regarding authors’ answers to your review. | Summary: The paper introduces a Generalized Multimodal Fusion (GMF) method using the Poisson-Nernst-Planck (PNP) equation to address challenges in multimodal fusion, such as feature extraction efficacy, data integrity, feature dimension consistency, and adaptability across various downstream tasks. The GMF method leverages theoretical insights from information entropy and gradient flow to optimize multimodal tasks, treating features as charged particles and managing their movement through dissociation, concentration, and reconstruction.
Key contributions of the paper include:
1. A theoretical framework combining PNP and information entropy to analyze multimodal fusion.
2. A novel GMF method that dissociates features into modality-specific and modality-invariant subspaces.
3. Experimental results showing GMF achieves competitive performance with fewer parameters on multimodal tasks like image-video retrieval and audio-video classification.
Strengths: Originality: The application of the PNP equation from physics to multimodal feature fusion is novel and creative. The theoretical framework combining PNP and information entropy provides an original perspective on analyzing multimodal learning.
Quality: The paper provides a solid theoretical foundation with detailed proofs and derivations. The experimental evaluation is comprehensive, covering multiple datasets and task types (NMT, EMT, GMT).
Clarity: The paper is well-structured and clearly written. The methodology is explained step-by-step with helpful visualizations.
Significance: The proposed GMF method shows promising results in terms of performance, parameter efficiency, and robustness to missing modalities. It has potential for broad applicability as a frontend for other fusion methods.
Weaknesses: 1.The theoretical analysis, while extensive, could benefit from more intuitive explanations to improve accessibility.
2. The experimental section lacks ablation studies to isolate the impact of different components of the GMF method.
3. While the method shows good results, the performance improvements over some baselines are relatively small in certain experiments (e.g. Table 2).
4. The paper does not thoroughly discuss potential limitations of the approach or scenarios where it may not be suitable.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How sensitive is the GMF method to the choice of hyperparameters like the dissociation boundary b(j)? Was any systematic hyperparameter tuning performed?
2. The paper mentions GMF can serve as a frontend for other modules. Were any experiments conducted combining GMF with more advanced fusion methods beyond MBT and Perceiver?
3. How does the performance of GMF scale with increasing numbers of modalities beyond the audio-visual experiments presented?
4. Are there any theoretical limitations on the types of features or modalities that can be fused using the PNP-based approach?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors briefly mention some limitations of linear operations for high-dimensional inputs in the conclusion. However, a more thorough discussion of potential limitations and failure cases would strengthen the paper. Additionally, while not highly relevant for this theoretical/methodological work, some discussion of potential negative societal impacts of improved multimodal fusion techniques (e.g. privacy concerns) could be included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable comments and constructive feedback. Due to the word limit, please refer to the rebuttal to all reviewers for some content. Here are our responses:
### 1. Sensitivity of Hyperparameter b(j)
| Method | Boundary | mAP@10 | mAP@20 | mAP@50 | mAP@100 | Params | FLOPs |
|-----------------------|----------------|--------|--------|--------|---------|-----------|--------|
| GMF (128) | 0.5 | 0.349 | 0.335 | 0.323 | 0.308 | 0.33M | 0.00G |
| GMF (128) | 0.125 | 0.333 | 0.317 | 0.301 | 0.289 | 0.33M | 0.00G |
| GMF (128) | 0.25 | 0.345 | 0.331 | 0.319 | 0.302 | 0.33M | 0.00G |
| GMF (128) | 0.75 | 0.351 | 0.335 | 0.322 | 0.309 | 0.33M | 0.00G |
| GMF (4096-128) | 0.5 | 0.355 | 0.341 | 0.327 | 0.315 | 119.21M | 0.23G |
| GMF (4096-128) | 0.25 | 0.354 | 0.339 | 0.323 | 0.309 | 135.46M | 0.27G |
| GMF (4096-128) | 0.75 | 0.349 | 0.333 | 0.315 | 0.298 | 102.94M | 0.20G |
| GMF (4096-128) | V=0.25, I=0.75 | 0.358 | 0.345 | 0.328 | 0.315 | 135.46M | 0.27G |
Performance degrades when $b(j)$ is insufficient or improperly set. Key principles for designing $b(j)$:
1. Dissociation dimensions of each subspace must exceed assumed dimensions.
2. For NMT tasks, allocate dissociation space to the modality-invariant subspace if the modality-specific subspace meets condition 1. If the feature extractor is learnable and $b(j) = 1$, GMF becomes a feature alignment method.
### 2. Ablation Study
We conducted three key ablation experiments:
| Module | A | V | AV |
|---------------------------|-------|-------|-------|
| Baseline | 23.31 | 25.14 | 28.56 |
| w/o dis-con | 10.72 | 10.94 | 13.83 |
| w/o dis-con, add map | 10.33 | 10.92 | 13.81 |
| w/o recon | 20.83 | 21.77 | 28.03 |
| GMF | 22.01 | 24.32 | 31.64 |
1. **Removing Dissolve-Concentrate Matrices:** Without these matrices, material conservation loss restores numerical values but not semantics, resulting in minimal generalization.
2. **Replacing with Identity Matrices:** Using identity matrices as intermediates led to a less defined learning objective, causing confusion and not altering non-homogeneous features.
3. **Removing Reconstruct Matrix and Conservation Loss:** Without these, there is no clear learning objective. The structure resembles a FFN layer, generating non-linear semantics. Additional feature mapping can only approach but not surpass the upper bound due to fixed input features. Lack of explicit feature separation impacts handling missing modalities.
These experiments highlight the critical role of each GMF matrix in ensuring effective feature manipulation and generalization. Theoretical proofs in Appendix D show significant performance loss from identity mapping, reinforcing the necessity of each GMF component.
### 3. Relationship Between Modality Expansion and GMF Performance
| Method | Modality | mAP@10 | mAP@20 | mAP@50 | mAP@100 | Params | FLOPs |
|--------------------|----------|--------|--------|--------|---------|----------|-------|
| GMF (128) | V-I | 0.349 | 0.335 | 0.323 | 0.308 | 0.33M | 0.00G |
| GMF (128) | V-I-T | 0.349 | 0.335 | 0.325 | 0.311 | 0.49M | 0.00G |
| GMF (4096-128) | V-I | 0.355 | 0.341 | 0.327 | 0.315 | 119.21M | 0.23G |
| GMF (4096-128-768) | V-I-T | 0.358 | 0.345 | 0.327 | 0.317 | 123.63M | 0.25G |
GMF (128) maps text and video features to the same dimension as image features, while GMF (4096-128-768) uses 4096-dimensional video, 128-dimensional image, and 768-dimensional text features directly. Low-dimensional text features introduce a linear increase in complexity, while high-dimensional text features do not bring significant overhead.
### Detailed Explanation of Table 2
Tables 3 and 4 show significant improvements. Table 2, representing a theoretical experiment, has less pronounced improvements:
1. **Fixed Feature Extractors (Frozen Extractor):** GMF shows the least sensitivity to missing modalities among all methods except UAVM, demonstrating the effectiveness of the information entropy theory.
2. **Trainable Feature Extractors (Training Extractor):** GMF and UAVM results are closest to the Baseline, highlighting the necessity of gradient consistency and supporting the redefined optimization objective.
### Potential Limitation of GMF and PNP Equation
GMF requires both modality-invariant and modality-specific features, based on the PNP equation. Features from ActivityNet come from pre-trained models without homogenization, causing variability. Our method effectively separates necessary features, but NMT-focused models often extract only modality-invariant features, making the PNP equation less applicable. If any modality lacks specific features, the PNP equation theory may not fully apply, potentially causing performance decline.
### Experiment of Fusion Backend
Our comparison methods include the latest approaches, emphasizing broad theoretical integration at the fusion stage. Our literature review has not identified other backend structures that are both uniquely structured and highly performant. We verify the validity of the information entropy theory using MBT and Perceiver. Combining the GMF method with backend fusion requires:
1. The input must be a one-dimensional feature, aligning with most feature extractors' output.
2. The method must allow varying input lengths; otherwise, GMF loses its unique advantage.
On FakeAVCeleb, we designed a new GMF-based structure integrating the latest methods, demonstrating impressive performance. This new structure has shown significant potential in practical applications.
---
Rebuttal Comment 1.1:
Title: Acknowledging Clarifications and Additional Experiments on GMF Method
Comment: Thank you for your comprehensive rebuttal. I have carefully considered your responses and additional data. The sensitivity analysis for the dissociation boundary b(j) and the detailed ablation study significantly strengthen your methodology. Your clarifications on Table 2 results and the modality expansion experiments provide valuable context. I appreciate your candid discussion of potential limitations, particularly regarding the PNP equation's applicability. The information on fusion backend experiments, especially the new GMF-based structure for FakeAVCeleb, demonstrates promising practical applications.
Given these thorough responses and additional experiments, I believe your paper has been further strengthened. I maintain my "Weak Accept" rating, but with increased confidence in your contributions and technical soundness. I encourage you to incorporate key points from your rebuttal into the final paper, particularly the ablation studies and limitations discussion. This will provide readers with a more comprehensive understanding of your method's strengths and areas for future work.
---
Reply to Comment 1.1.1:
Title: Respond to the comments from reviewer wF9x.
Comment: Thank you for your valuable suggestions, which have provided us with great inspiration. We will add an appendix to include this content, which will also enhance and complete the relevant work. | Summary: In this paper, the authors combined the Poisson-Nernst-Planck (PNP) equation with information entropy theory and proposed a generalized multimodal fusion approach, which dissociates modality-specific and modality-invariant features, thereby reducing the joint entropy of input features while simultaneously decreasing the downstream task-related information entropy. The experimental results demonstrate that the proposed approach can improve the generalization and robustness of multimodal tasks.
Strengths: - It is innovative to employ PNP for solving the multimodal fusion issue, by treating features as charged particles to disassociate them.
- A generalized multimodal fusion approach was designed, which can overcome the strong assumptions made by existing methods.
- Experiments showed the effectiveness of the proposed GMF approach in efficiency and flexibility.
Weaknesses: - Significance test (e.g., Wilcoxon signed-rank test) would be helpful to better illustrate the significance of the proposed method compared against baselines.
- Social impacts are not explicitly discussed in the paper/appendix.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Section 5.1, the authors mentioned “For NMTs, ActivityNet dataset evaluate (4)”. However, no (4) is found in the preceding context.
- It is not quite clear how GMF works as a front-end when combined with other multimodal fusion approaches.
- According to Table 2, GMF works better when taken as back-end compared against front-end. Any possible explanations for these results?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please refer to Weakness and Question, which are the aspects suggested to be further improved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and constructive comments. Due to word limit constraints, please refer to the rebuttal to all reviewers for answers to some questions. Here are our responses, which we hope will address your concerns:
### 1. Labeling Errors in Section 5.1
Thank you very much for your thorough review. This is indeed an issue that significantly affects understanding but is difficult to detect through self-review. The reference to "evaluate (4)" in Section 5.1 is a labeling error. It should actually be "evaluate (1)." This mistake occurred because an auto-increment index was used during the final labeling adjustments without correction. We will rectify this error in the revised version.
### 2. Clarification on Front-End and Back-End Concepts
We apologize for any confusion caused by our previous explanation. To clarify, G-X represents method X as the backend, while GMF denotes simple concatenation as the backend. The frontend consistently employs feature rearrangement (see Appendix G.3, Figure 13). When used with other fusion methods, the backend always receives modality-specific features, while modality-invariant features are excluded from the computation. This approach is adopted because direct feature fusion not only reduces the distinguishability of the modality sources but also significantly increases computational cost.
Compared to the original X fusion method, G-X improves performance and reduces sensitivity to missing modalities. This improvement is due to the overall decrease in information entropy, as the backend method fuses only the modality-specific features, leaving the correct representation of modality-invariant features unaffected. Consequently, this ensures the usability of invariant features as a reference when a modality is missing, allowing modality-invariant features to still guide the process. At the same time, it guarantees the proper fusion of modality-specific features.
---
Rebuttal 2:
Title: Any comments?
Comment: Dear Jvpa. The end of the discussion period is close. I would be grateful if you could provide feedback regarding the authors' answers to your review.
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed clarification. My concerns have mostly been addressed through the author response.
I have no further questions at this time and tend to keep my current score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive comments. For questions common to several reviewers, we provide unified answers below:
## 1. Wilcoxon Signed-Rank Test
We propose two hypotheses:
1. H0: No significant difference between our method and the comparison algorithms.
2. H1: Significant difference between our method and the comparison algorithms.
### (a) Differences between GMF and Baselines
| | Baseline | AVoiD-DF | MISA | UAVM | DrFuse |
|-----------|----------|----------|------|------|--------|
| Statistic | 10.0 | 0.0 | 7.0 | 9.0 | 3.0 |
| P-Value | 1.0000 | 0.0078 | 0.1484 | 0.8438 | 0.0391 |
| Hypothesis| H0 | H1 | H0 | H0 | H1 |
We reject H0 for AVoiD-DF and DrFuse, indicating significant differences. For Baseline, MISA, and UAVM, we cannot reject H0 due to performance degradation under missing-modality conditions. However, GMF shows significant improvement in modality fusion performance, improving by 3.08% and 5.49%. Compared to MISA, GMF shows a performance loss under complete-modality conditions but excels under missing-modality conditions; the opposite holds for UAVM. However, since GMF only faithfully reconstructs the features, it cannot guarantee superior performance under both conditions compared to other methods. Thus, signed-rank testing cannot reflect significant differences in performance.
### (b) Differences between G-Methods and Original Methods
We also tested the combination of other methods with GMF. The results compare G-Perceiver with Perceiver and GMF, and G-MBT with MBT and GMF. G-Perceiver and G-MBT show significant differences compared to Perceiver and MBT, allowing us to reject H0, but not compared to GMF. This indicates our method enhances performance and imparts GMF characteristics: mitigating missing modalities and improving fusion performance.
| | Perceiver | GMF |
|-----------|-----------|------|
| Statistic | 1.0 | 12.0 |
| P-Value | 0.0016 | 0.4609 |
| Hypothesis| H1 | H0 |
| | MBT | GMF |
|-----------|-----|------|
| Statistic | 0.0 | 9.0 |
| P-Value | 0.0078 | 0.2500 |
| Hypothesis| H1 | H0 |
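For reference, the exact two-sided signed-rank test behind these tables can be reproduced in a few lines. The sketch below is a pure-Python illustration (in practice one would call `scipy.stats.wilcoxon`); it assumes no zero differences and no tied absolute differences, and the function name is ours. Note that a statistic of 0 with p ≈ 0.0078, as in the AVoiD-DF column, is consistent with 8 paired comparisons, since 2/2^8 = 0.0078125.

```python
from itertools import product

def wilcoxon_exact(x, y):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.

    Illustration only: assumes no zero differences and no tied absolute
    differences, and enumerates all 2^n sign patterns, so it is only
    practical for small n.
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    # Rank the absolute differences from smallest (rank 1) to largest.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    total = n * (n + 1) // 2
    w = min(w_plus, total - w_plus)  # the statistic reported in the tables
    # Under H0 every sign pattern of the n differences is equally likely,
    # so the exact p-value is the fraction of patterns at least as extreme.
    count = 0
    for signs in product([0, 1], repeat=n):
        stat = sum(r for r, bit in zip(range(1, n + 1), signs) if bit)
        if min(stat, total - stat) <= w:
            count += 1
    return w, count / 2 ** n
```

At the conventional 0.05 level, p-values below 0.05 reject H0 (significant difference), matching the H1/H0 rows above.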
## 2. Error Analysis
Since the signed-rank test cannot directly reflect significance, we conducted an error analysis:
### 1. Modality Robustness (Table 2, columns Frozen A, V and Training A, V; lower is better):
| | Baseline | AVoiD-DF | MISA | UAVM | DrFuse | MBT | Perceiver | GMF | G-MBT | G-Perceiver |
|----------|----------|----------|------|------|--------|-----|-----------|-----|-------|-------------|
| **MSE** | 0.00 | 72.47 | 19.56| 2.62 | 16.14 | 34.82 | 40.64 | 4.43 | 24.55 | 28.27 |
| **RMSE** | 0.00 | 8.51 | 4.42 | 1.62 | 4.02 | 5.90 | 6.37 | 2.10 | 4.95 | 5.32 |
GMF's proximity to the baseline is second only to UAVM, indicating lower sensitivity to missing modalities.
### 2. Feature Informativeness (Table 2, columns A (uni) and V (uni); lower is better):
| | Baseline | AVoiD-DF | MISA | UAVM | DrFuse | MBT | Perceiver | GMF | G-MBT | G-Perceiver |
|----------|----------|----------|------|------|--------|-----|-----------|-----|-------|-------------|
| **MSE** | 0.00 | 11.25 | 2.69 | 0.14 | 2.88 | 4.88 | 0.12 | 0.07 | 2.65 | 0.11 |
| **RMSE** | 0.00 | 3.35 | 1.64 | 0.37 | 1.70 | 2.21 | 0.34 | 0.26 | 1.63 | 0.33 |
GMF is closest to the baseline, indicating the highest amount of effective information in unimodal features trained together with GMF.
### 3. Modality Fusion Capability (Table 2, columns Frozen AV and Training AV; higher is better):
| | Baseline | AVoiD-DF | MISA | UAVM | DrFuse | MBT | Perceiver | GMF | G-MBT | G-Perceiver |
|---------------------|----------|----------|------|------|--------|------|-----------|------|-------|-------------|
| **Performance Index** | 0.00 | 5.76 | 22.77| 5.44 | 19.85 | 20.04 | 33.87 | 10.69 | 36.90 | 45.61 |
| **Accuracy Score** | 0.00 | 2.40 | 4.77 | 2.33 | 4.46 | 4.48 | 5.82 | 3.27 | 6.07 | 6.75 |
Here, the Performance Index and Accuracy Score are the metrics we defined. The expressions are:
$\text{Performance Index} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 $
$\text{Accuracy Score} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2} $
G-Method shows significantly improved fusion capabilities, with G-Perceiver and G-MBT being the most notable. This highlights GMF's effective rearrangement capability as a fusion front-end.
In conclusion, Table 2 shows GMF requires minimal computational cost and performs well under both conditions. On average, GMF performs better than the comparison methods.
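The two expressions above are the standard MSE and RMSE, here measuring each method's deviation from the Baseline row. A minimal sketch (function names are ours, mirroring the labels in the tables):

```python
import math

def performance_index(baseline, scores):
    """MSE of a method's per-column scores against the baseline (lower = closer)."""
    n = len(baseline)
    return sum((b - s) ** 2 for b, s in zip(baseline, scores)) / n

def accuracy_score(baseline, scores):
    """RMSE: the square root of the performance index."""
    return math.sqrt(performance_index(baseline, scores))
```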
## 3. Ethical Considerations
Our paper strictly adheres to NeurIPS ethical guidelines, using only open-source datasets and ensuring no involvement of human subjects. This compliance meets ethical standards and avoids data privacy and consent issues, as detailed in our checklist. Considerations include:
**(a) Positive Impact:**
The versatility of the GMF method allows it to be applied across various fields:
- **Medical Imaging**: Enhances diagnostic accuracy and efficiency.
- **Autonomous Driving**: Improves environmental perception and decision-making.
- **Multimedia Retrieval**: Increases relevance and accuracy of search results.
These advancements benefit healthcare, traffic safety, and information technology accessibility.
**(b) Negative Impact:**
The feature separation capability of GMF can enhance generative models, enabling them to produce highly detailed intra-modal features and high inter-modal synchrony, which could inadvertently facilitate the creation of deepfakes. To mitigate risks, we restrict GMF's commercial use through agreements and recommend safety measures for future researchers, such as limiting model releases, developing usage guidelines, and establishing monitoring mechanisms. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Continual Learning in the Frequency Domain | Accept (poster) | Summary: Inspired by the human visual system (HVS), this paper proposes a new framework called Continual Learning in the Frequency Domain (CLFD) for edge devices. For the input features of the feature extractor, CLFD employs wavelet transforms to map the original input image to the frequency domain, thereby reducing the size of the input feature map. In experiments on two public datasets, the performance of the proposed and conventional methods is discussed in terms of both accuracy and learning efficiency.
Strengths: - Continuous learning in edge devices is a significant study from a practical point of view.
- The proposed method is simple and effective under limited conditions.
- A minimal survey of previous research is provided.
Weaknesses: Throughout, the explanation of the proposed method needs to be more comprehensive. Experiments also need to be more comprehensive to demonstrate the effectiveness of the proposed method. Specifically, the paper has the following rooms for improvement.
- In line 60, there needs to be a clear explanation of why using frequency space is adequate. While it is interesting to get inspiration from HVS, there is no apparent reason why it is a means to the challenge of the proposed method (continuous learning on edge devices). In other words, the introduction needs a more logical structure.
- In Figure 2, the meaning of the symbols (e.g., \otimes, etc.) is unclear, so a specific explanation is needed. The clarity of the figure needs to be improved.
- In the description of the method, there is the following statement: "Considering that tasks are predominantly sensitive to specific frequency domain features extracted by a feature extractor, different To this end, we propose the CFFS, designed to manage the issue of overlap in frequency domain features among samples from different classes." However, no results from the analysis support this issue (fact). Furthermore, it needs to state why CFFS is the idea to solve this fact. Therefore, the design of the proposed method needs to be more convincing.
- Many grammatical errors need to be corrected. A comma or period is needed after the formula. (e.g., line 210: Where->where)
- The method's design throughout is very ad hoc and heuristic. For example, in Equation 4, no clear reason is given for the algorithm's design.
- Only the overall ACC represented by Equation 6 is evaluated in the experiment. In continuous learning, there are other evaluation measures (e.g., Forgetting measure). In addition, task-specific ACCs and other measures need to be evaluated. "[Chaudhry et al., 2018b] Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-gem. In Proc. ICLR, 2018."
- In Table 1, there are no experiments for larger Buffer numbers. For example, previous studies (CLS-ER and SparCL-ER) have more buffers in their paper. Why are there no comparisons for larger numbers of buffers? It is unfair from the point of view of experiments in academic papers to publish only comparisons of conditions in which the proposed method is superior. (For example, edge devices that will be satisfied with larger buffer sizes may be developed.)
- Furthermore, it is difficult to judge the effectiveness of the proposed method since the experiments do not evaluate larger datasets; in this paper, only smaller datasets such as CIFAR and TinyImageNet are used.
- When the number of tasks increases, the proposed method will likely perform poorly for the latter tasks.
Technical Quality: 2
Clarity: 2
Questions for Authors: The paper and the proposed method need both clarity and experimentation.
In particular, the proposed method's effectiveness is difficult to determine because of its ad hoc design and insufficient explanation.
- Please explain more about the necessity of utilizing frequency space for continuous learning for edge devices.
- Furthermore, I'm curious about the absence of experiments with larger Buffer numbers in Table 1. I think edge devices that will be satisfied with larger buffer sizes may be developed in the future.
- The overall ACC, represented by Equation 6, is the only evaluation. Other evaluation measures (e.g., forgetting measures) also exist in continuous learning. In addition, task-specific ACCs, etc., need to be evaluated. Please explain why you did not use these evaluation measures.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: - For large data sets, the proposed method may need to be revised.
- When the number of buffers is large, the proposed method may be inferior to the conventional method.
- When the number of tasks increases, the proposed method is likely not to perform well for the latter tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we extend our gratitude for the time and attention devoted to reviewing our paper. Below, we have carefully addressed each of your concerns to the best of our knowledge to improve the overall contribution of the paper.
# The necessity of utilizing frequency space for continuous learning for edge devices.
Thank you for this great question. Edge devices are constrained by limited memory and computing resources, underscoring the significance of training efficiency and peak memory usage in continual learning. Presently, continual learning primarily operates in the spatial domain, where the filters in a CNN are generally smooth due to the local pixel smoothness in natural images, resulting in spatial redundancies that impact the efficiency and memory usage of continual learning. Conversely, in the frequency domain, it is possible to train on images using distinct frequency domain features, significantly reducing redundancy in the learning process and improving the performance of continual learning methods.
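To make the size reduction concrete: a single-level 2D wavelet transform splits an H×W image into four (H/2)×(W/2) sub-bands (one low-frequency LL band and three high-frequency bands), quartering the spatial resolution of each feature map. Below is a minimal pure-Python sketch using the Haar basis, chosen here only for illustration; the paper's exact wavelet choice is not assumed.

```python
def haar_dwt2_level1(img):
    """Single-level 2D Haar transform of an HxW image (H, W even).

    Returns four (H/2)x(W/2) sub-bands: LL (low-frequency approximation)
    plus LH, HL, HH (horizontal, vertical, diagonal detail).
    """
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            # Each 2x2 block of pixels produces one coefficient per sub-band.
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 2.0
            LH[i // 2][j // 2] = (a - b + c - d) / 2.0
            HL[i // 2][j // 2] = (a + b - c - d) / 2.0
            HH[i // 2][j // 2] = (a - b - c + d) / 2.0
    return LL, LH, HL, HH
```

Feeding selected sub-bands (rather than the full image) to the feature extractor is what shrinks the input feature map and, with it, training cost and peak memory.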
# The symbols in Figure 2 is unclear.
Sorry for the unclear expression. $\otimes$ represents that the feature is masked. In the final version of the paper, we will improve the clarity of the figure.
# The design of the CFFS needs to be more convincing.
In a previous study [1], it was found that continual learning has task-sensitive parameters for different tasks, and reducing the overlap of these parameters can significantly improve the performance of continual learning. Similarly, in the frequency domain, we observed that it also has task-sensitive parameters. As shown in Table 3, when we pruned 75% of the weights, the performance of the CLFD method improved significantly. Considering the presence of task-sensitive parameters in the classifier, which causes different tasks to be sensitive to specific frequency domain features, we propose CFFS to balance the reusability and interference of frequency domain features.
[1] Sarfraz, Fahad, Elahe Arani, and Bahram Zonooz. "Sparse coding in a dual memory system for lifelong learning." AAAI. 2023.
# Grammatical errors need to be corrected.
We appreciate the reviewer's feedback regarding the grammatical errors. We will update the paper according to your suggestions in the final revision.
# The method's design throughout is very ad hoc and heuristic.
We agree with the reviewer that the method's design is very heuristic. However, the design of our method is well-founded. Heterogeneous Dropout has already been demonstrated to be effective in continual learning within the spatial domain [1,2]. Building on this foundation, we propose Frequency Dropout (Eq. 4). Additionally, in this work, we emphasize that our primary focus is on enhancing the efficiency of continual learning training by leveraging the frequency domain, thereby promoting its application on edge devices. Our research is among the first in this area, and we hope it will inspire the continual learning research community to further explore the frequency domain.
[1] Sarfraz, Fahad, Elahe Arani, and Bahram Zonooz. "Sparse coding in a dual memory system for lifelong learning." AAAI. 2023.
[2] Vijayan, Preetha et al. “TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion.” NeurIPS. 2023.
# Other evaluation measures.
In Appendix Table A2, we provide the forgetting measure, while Figure 7 presents the task-specific ACCs, and Figure 8 illustrates the stability and plasticity of our framework. We will incorporate the key experimental results into the main text.
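For readers unfamiliar with the metric, the forgetting measure of Chaudhry et al. averages, over all earlier tasks, the drop from each task's best past accuracy to its final accuracy. A minimal sketch (the accuracy-matrix indexing convention is our assumption):

```python
def average_forgetting(acc):
    """Average forgetting over earlier tasks.

    acc[t][j] = accuracy on task j after training on task t (for t >= j).
    For each task j before the last, forgetting is the gap between the best
    accuracy it ever reached and its accuracy after the final task.
    """
    T = len(acc)
    drops = []
    for j in range(T - 1):
        best_past = max(acc[t][j] for t in range(j, T - 1))
        drops.append(best_past - acc[T - 1][j])
    return sum(drops) / len(drops)
```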
# There are no experiments for larger Buffer numbers.
We agree with the reviewer that conducting experiments on larger buffer sizes is essential. We conduct re-experiments under the buffer settings of SparCL-ER [1] and provide a detailed discussion in the general response.
[1] Wang, Zifeng, et al. "Sparcl: Sparse continual learning on the edge." NeurIPS. 2022.
# The experiments do not evaluate larger datasets.
We agree with the reviewer's suggestion that incorporating larger datasets can enhance the credibility of the results. Therefore, we introduced the Split Imagenet-R dataset as an alternative and provided a detailed discussion in the general response.
# The proposed method will likely perform poorly when the number of tasks increases.
We sincerely thank the reviewers for raising this concern. On the Split Tiny ImageNet and Split ImageNet-R datasets, each comprising 10 tasks, our framework consistently demonstrates strong performance, further validating the effectiveness of our framework. We emphasize that although our accuracy remains high even with a large number of classes, the principal contribution of our work lies in introducing the concept of continual learning in the frequency domain, which significantly improves training efficiency and facilitates the application of continual learning on edge devices.
We once again thank the reviewer for providing detailed feedback. We have made an utmost effort to resolve all the concerns raised. Please let us know in case we have missed something.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer KM8C
Comment: Thank you very much for your kind feedback. I deeply appreciate very thoughtful feedback. However, I still have a few concerns about the following;
1) Scalability.:
In the rebuttal, we discuss the validity of increasing the number of tasks with results of ten tasks. However, as noted in the paper below, in my understanding, the performance is also discussed with a larger number of tasks, e.g., 50 or 20 tasks, in a replay-based manner.
- [i] Rebuffi, Sylvestre-Alvise, et al. "icarl: Incremental classifier and representation learning." in Proc. CVPR 2017. (https://arxiv.org/pdf/1611.07725)
- [ii] Wu, Yue, et al. "Large scale incremental learning." in Proc.CVPR. 2019.(https://openaccess.thecvf.com/content_CVPR_2019/papers/Wu_Large_Scale_Incremental_Learning_CVPR_2019_paper.pdf)
2) Positioning.:
It is unclear whether the proposed method is "continual learning with the frequency domain (only) for edge devices" or "continual learning with the frequency domain (including for edge devices)". I am a bit confused about whether it is the former or the latter, as the explanation is inconsistent throughout the paper text, title, and rebuttal.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer KM8C
Comment: We thank the reviewer for taking the time to review our rebuttal. Below, we provide further clarification on the concerns you have raised.
# Scalability.
To validate the effectiveness of CLFD in addressing scalability issues, we divide the Split ImageNet-R dataset into 20 tasks and evaluate the performance of both ER-ACE and CLFD-ER-ACE. Additionally, we evaluate the average Class-IL accuracy after the completion of each task as the number of tasks increased from 10 to 20. The results are as follows:
| Methods | Class-IL Accuracy (%) | Task 10 | Task 11 | Task 12 | Task 13 | Task 14 | Task 15 | Task 16 | Task 17 | Task 18 | Task 19 | Task 20 |
|-------------------|-----------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| ER-ACE | 10.63 | 15.38 | 15.37 | 15.26 | 12.75 | 11.60 | 12.22 | 8.63 | 7.48 | 10.07 | 10.00 | 10.63 |
| CLFD-ER-ACE | 13.44 | 15.93 | 16.43 | 17.20 | 13.59 | 15.42 | 13.08 | 9.75 | 14.69 | 13.26 | 12.79 | 13.44 |
The results illustrate that our framework consistently enhances the performance of continual learning methods on the intricate Split ImageNet-R dataset, regardless of whether the dataset is divided into 10 tasks or 20 tasks. Furthermore, at any task boundary between 10 and 20, the accuracy of our framework consistently outperforms the baseline method. This indicates that our framework demonstrates strong scalability. We hope to provide results for additional methods before the end of the discussion period; however, time constraints and limited computational resources are significant challenges. In any case, we will incorporate these results into our final revision.
# Positioning.
We apologize for any confusion. Our framework is “continual learning with the frequency domain (including for edge devices).” The purpose of this framework is to leverage the frequency domain to enhance the performance and training efficiency of continual learning methods. Even when training in the cloud, our framework significantly reduces training time while improving accuracy. On edge devices, the advantages of our framework are even more pronounced due to the stricter memory constraints. Our framework can greatly increase training speed and reduce peak memory usage without requiring any additional optimization, an outcome that previous work has not achieved, thereby facilitating the deployment of continual learning on edge devices. We will revise the corresponding statements to avoid any potential confusion.
We once again express our gratitude to the reviewer for the valuable feedback. Please let us know if any concerns persist. We would be more than willing to provide additional information to ensure a thorough understanding of our work. | Summary: Based on research showing that the human visual system (HVS) exhibits varying sensitivities to different frequency components, this paper proposes performing continual learning in the wavelet frequency domain to reduce the size of the inputs. The proposed CLFD module includes a feature extractor and a feature encoder, where the feature encoder generates low-frequency, global, and high-frequency features, and the feature extractor selects class-specific frequency features. The generated features are then used for continual learning.
Strengths: 1. This paper introduces the wavelet frequency domain features into CL. By encoding the low-frequency features and high-frequency features respectively, the proposed CLFD may have potential to mimic the human visual system for better learning results.
2. The feature extractor considers the class information of the frequency features to help the process of CL.
Weaknesses: 1. It is better to include larger datasets such as ImageNet-1k in the experiments to make the results more convincing.
2. Table-1 leaves some unclear items. For instance, the meaning of class-IL (class incremental learning) and task-IL (task incremental learning) should be explained. The meaning of shadowed lines and bold texts in the table should be included.
3. The classification performance of the proposed method seems to have a large gap with the state-of-the-art methods on both CIFAR-10 and Tiny ImageNet.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. What is the time cost of the wavelet transform and of the forward and backward passes of the CLFD module?
2. Please clearly explain table-1 for better understanding.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: 1. The authors may consider discussing different frequency domains, such as the Fourier and discrete cosine domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We extend our sincere gratitude for your thorough review of our paper. Below, we have diligently addressed each of your concerns to the best of our understanding, aiming to enhance the paper's overall contribution.
# It is better to include larger databases.
We agree with the reviewer that introducing larger datasets can make the results more convincing. Therefore, we have introduced the Split ImageNet-R dataset as an alternative and provided a detailed discussion in the general response.
# Table-1 leaves some unclear items.
We appreciate the reviewer's feedback regarding the shortcomings in Table 1. The highest results are marked in bold, and shaded rows indicate the results of our framework. We introduced the definitions of Class-IL and Task-IL on line 147: in the simplest testing configuration, the task identity of each upcoming test instance is assumed to be known, a scenario defined as Task Incremental Learning (Task-IL). If the class subset of each sample remains unknown during CL inference, the setting escalates to the more complex Class Incremental Learning (Class-IL). We will update Table 1 in the final version and provide definitions for Class-IL and Task-IL in the experiment section.
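The distinction above can be illustrated with a minimal sketch (illustrative only, not the paper's code): at test time, Task-IL masks the logits to the known task's classes before taking the argmax, while Class-IL takes the argmax over all classes seen so far.

```python
import numpy as np

def predict(logits, task_classes=None):
    """Class-IL: argmax over all classes seen so far.
    Task-IL: the task identity is given, so logits are first masked
    to that task's classes. (Illustrative sketch, not the paper's code.)"""
    if task_classes is not None:
        masked = np.full_like(logits, -np.inf)
        masked[task_classes] = logits[task_classes]
        logits = masked
    return int(np.argmax(logits))

logits = np.array([0.1, 2.0, 0.3, 1.5])
print(predict(logits))                       # Class-IL: 1 (global argmax)
print(predict(logits, task_classes=[2, 3]))  # Task-IL with task {2, 3}: 3
```

Because the task identity removes all inter-task confusion, Task-IL accuracy is typically much higher than Class-IL accuracy on the same model.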
# The classification performance of the proposed method seems to have a large gap with the state-of-the-art methods.
The reason for the low classification performance is that we use only a small buffer size. When we adjust the buffer size to match that of the state-of-the-art methods, the classification performance improves significantly, and our method continues to enhance the performance of various rehearsal-based methods. We provide a detailed analysis in the general response.
# The time cost of the wavelet transform and the CLFD module.
We thank the reviewer for identifying the lack of a training-time analysis. We measure the training time of the wavelet transform, the CLFD module, the ER method, and the CLFD-ER method over the entire Split CIFAR-10 training process. The wavelet transform accounts for only 1.5% of the ER training time, while the CLFD module accounts for only 13.1%. Notably, integrating the CLFD module with the ER method reduces the training time by 2.3$\times$. This demonstrates the efficiency of our framework.
| Methods | Training Time (s) |
|---------|---------------------------------|
| wavelet transform | 127 |
| CLFD | 1125 |
| ER | 8542 |
| CLFD-ER | 3732 |
# Discussion of different frequency domains.
We discuss different frequency domains in Sections 2.2 and 3.2. The discrete cosine transform and the discrete Fourier transform both cause a complete loss of spatial information in images, making it difficult to apply data augmentation techniques. They are therefore challenging to use in continual learning.
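The contrast can be made concrete with a one-level 2-D Haar DWT (a minimal NumPy sketch, not the paper's FFE implementation): the transform splits an image into four half-resolution subbands, each of which retains the image's spatial layout, so crops and flips still make sense after the transform.

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar wavelet transform. Each subband is a
    half-resolution map that preserves the image's spatial layout,
    unlike the DFT/DCT, whose coefficients are global."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row-wise low-pass
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row-wise high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-low: blurred thumbnail
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (2, 2): each subband has one-fourth of the pixels
```

Each subband having one-fourth of the original pixels is also what yields the reduced input feature map size discussed elsewhere in the rebuttal.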
Once again, we express our gratitude for your thoughtful evaluation and consideration. We have conscientiously endeavored to address each of the concerns you raised, and we are dedicated to ensuring that our paper makes a meaningful contribution to the conference proceedings.
---
Rebuttal 2:
Comment: I thank the authors for answering my comments.
I believe that the authors' responses address most of my concerns about this paper, and I would like to re-rate this paper as weak accept.
Strengths: The strengths of this paper are listed below:
- The paper proposed a novel method that utilizes the frequency domain to decode the information of inputs.
- It can reduce the storage requirement to store a sample, which leads to less memory or more stored samples.
- They ran many experiments to show the performance.
Weaknesses: The weaknesses of this paper are listed below:
- Some parts are not presented clearly; more details could be included. Including pseudocode may help.
- Some notations are not explained clearly, e.g. sec. 3.4.
- Even though there are many experiments, more ablation studies on hyperparameters would provide more information.
Technical Quality: 3
Clarity: 2
Questions for Authors: How is the historical data stored? What format is the data in a memory buffer? How do you use it for replay?
Could you please give some more details about CFFS? The overall idea can be understood, but the details are unclear because of confusing notations.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: As the authors have discussed, replay-based methods are unsuitable for all scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer for dedicating time to thoroughly review our work. We value the positive feedback provided on our manuscript. Below, we address the weaknesses and queries raised:
# More ablation studies about hyperparameters.
We appreciate the reviewer's feedback and acknowledge the importance of analyzing the robustness of hyperparameter selection. However, because our framework can integrate with various continual learning methods, ablation studies on hyperparameters require substantial computational resources and extensive experimentation. Given this, we have demonstrated the effectiveness of our framework by using the default hyperparameters for all datasets, methods, and buffer sizes.
# The ways to replay data.
We use the widely accepted continual learning repository Mammoth CL [1] for data replay. We adopt a reservoir sampling strategy [2] to save samples, storing all data in tensor format. During each batch’s training, we read an additional batch of samples from the memory buffer and train them together with the current batch.
[1] Buzzega, Pietro, et al. "Dark experience for general continual learning: a strong, simple baseline." NeurIPS. 2020.
[2] Vitter, Jeffrey S. "Random sampling with a reservoir." *ACM Transactions on Mathematical Software (TOMS)* 11.1 (1985): 37-57.
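For concreteness, the reservoir sampling strategy of [2] can be sketched as follows (an illustrative sketch with our own function names, not the Mammoth implementation):

```python
import random

def reservoir_update(buffer, capacity, item, num_seen):
    """Algorithm R (Vitter, 1985): after seeing num_seen items, `buffer`
    holds a uniform random sample of at most `capacity` of them."""
    if len(buffer) < capacity:
        buffer.append(item)
    else:
        # Keep the new item with probability capacity / (num_seen + 1)
        j = random.randint(0, num_seen)  # inclusive on both ends
        if j < capacity:
            buffer[j] = item

buffer, capacity = [], 50
for i in range(1000):                    # stream of 1000 samples
    reservoir_update(buffer, capacity, i, i)
print(len(buffer))  # 50
```

The appeal for continual learning is that the stream length need not be known in advance, yet every sample seen so far has an equal probability of residing in the buffer.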
# More details about CFFS.
We apologize for any confusion. In the final version of the paper, we will optimize the details of CFFS. For a given task, we select only 60% of frequency domain features for classification. Each frequency domain feature is monitored for its selection frequency during training. Essentially, for each class, each frequency domain feature is assigned a selection counter $\mathcal{F}$ that increments when the feature's value is among the top 60%. Before selecting frequency domain features, each feature is assigned a dropout probability. During the initial epoch of each task training, we apply frequency dropout. In the later epochs of training, we use semantic dropout and update its probability after each epoch (Eq. 5). After completing the first task training, for subsequent tasks, we calculate the similarity of the classes (Eq. 2) and update the frequency dropout probability (Eq. 4) at the end of the first epoch. The pseudocode is provided in the attached PDF.
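Since the pseudocode itself is in the attached PDF, a minimal sketch of the selection-counter idea (hypothetical names; the 60% proportion is taken from the description above, everything else is illustrative) might look like:

```python
import numpy as np

def update_counters(counters, features, top_fraction=0.6):
    """Increment the selection counter of every frequency-domain feature
    whose value is among the top `top_fraction` for this sample.
    (`update_counters` is an illustrative name, not the authors' API.)"""
    k = max(1, int(len(features) * top_fraction))
    top_idx = np.argsort(features)[-k:]  # indices of the k largest values
    counters[top_idx] += 1
    return counters

rng = np.random.default_rng(0)
counters = np.zeros(10, dtype=int)
for _ in range(100):                     # 100 training samples
    counters = update_counters(counters, rng.random(10))
print(counters.sum())  # 100 samples x 6 selected features = 600
```

The per-class counters accumulated this way would then drive the dropout probabilities updated in Eqs. 4 and 5.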
We hope that the clarification provided has resolved your concerns and inquiries. Should you require further assistance or elaboration, we are willing to provide additional information to ensure a comprehensive understanding of our work.
---
Rebuttal 2:
Title: More ablation studies about hyperparameters
Comment: We conduct supplementary ablation experiments to examine the influence of different feature selection proportions on model performance. Our analysis concentrated on two methods: CLFD-ER and CLFD-ER-ACE. Figure 8 in the appendix illustrates that ER exhibits the highest degree of plasticity, whereas ER-ACE demonstrates the greatest stability. We conduct tests on the S-CIFAR-10 dataset with a buffer size of 50. The accuracy results under the Task-IL setting are as follows:
| Feature Selection Proportions | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
|-------------------------------|------|------|------|------|------|------|------|------|------|
| CLFD-ER | 85.67 | 85.47 | 84.91 | 84.89 | 83.88 | 84.45 | 84.49 | 84.99 | 87.97 |
| CLFD-ER-ACE | 85.06 | 85.86 | 86.74 | 86.54 | 87.05 | 87.13 | 87.30 | 88.12 | 89.83 |
And the accuracy results under the Class-IL setting are as follows:
| Feature Selection Proportions | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
|-------------------------------|------|------|------|------|------|------|------|------|------|
| CLFD-ER | 51.03 | 49.97 | 48.91 | 46.67 | 45.69 | 45.56 | 43.88 | 41.79 | 39.58 |
| CLFD-ER-ACE | 50.37 | 50.50 | 50.64 | 50.87 | 52.12 | 52.74 | 52.84 | 53.97 | 54.20 |
We also conduct an ablation study on the other hyperparameters presented in Section E.2. Since our framework can be integrated with various continual learning methods, we do not use grid search; instead, we individually investigate the impact of each hyperparameter on continual learning performance. We focus on the CLFD-ER-ACE method for this study, as it performs well across all datasets and does not introduce any additional hyperparameters. We conduct tests on the S-CIFAR-10 dataset with a buffer size of 50. The accuracy results under the Class-IL setting are as follows:
| $\beta_c$ in Eq. 5 | 0.5 | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 |
|-------------------------------|-----|---|-----|---|-----|---|-----|---|-----|---|
| CLFD-ER-ACE | 50.70 | 50.19 | 51.47 | 52.74 | 50.29 | 52.48 | 51.65 | 51.41 | 50.73 | 52.36 |
| $\lambda$ in Eq. 4 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
|-------------------------------|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| CLFD-ER-ACE | 50.85 | 52.08 | 51.24 | 51.68 | 52.74 | 51.77 | 50.81 | 50.68 | 51.81 |
| $\mathcal{E}$ in CFFS | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 |
|---------------------------------|----|-----|-----|-----|-----|-----|-----|-----|-----|
| CLFD-ER-ACE | 52.06 | 52.27 | 52.16 | 52.74 | 52.43 | 52.18 | 52.65 | 52.64 | 52.01 |
Our choice of hyperparameters in the paper yields optimal performance, and our framework demonstrates robustness to these hyperparameter selections. We hope that these ablation studies will provide the reviewer a more thorough comprehension of our work. We extend our appreciation for the feedback provided. Should there be any aspects requiring further elucidation, we are prepared to offer further explanations.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. I will keep the rating. | Summary: This paper introduces a novel framework designed to enhance the efficiency and effectiveness of continual learning (CL) systems by leveraging frequency domain representations, inspired by the human visual system's varying sensitivity to different frequency components. This approach aims to address the limitations of existing rehearsal-based methods in CL, particularly under constraints like limited resources on edge devices. The framework, named Continual Learning in the Frequency Domain (CLFD), uses a wavelet transform to convert input images into frequency domain representations, optimizing both the input and output features of the feature extractor. This allows for better management of memory usage and computational demands, leading to improvements in both accuracy and training efficiency. Extensive experiments demonstrate that CLFD can enhance the performance of state-of-the-art rehearsal-based methods, achieving higher accuracy and reduced training times.
Strengths: 1. The proposed method is inspired by the human visual system, which is sensitive to different frequency components and efficiently reduces visually redundant information. This represents a novel shift from traditional spatial domain methods to frequency domain methods in CL, suggesting a significant departure from established methods.
2. CLFD significantly enhances the performance and training efficiency of continual learning systems on edge devices, achieving up to 6.83% higher accuracy and reducing training time by 2.6 times.
3. The paper is structured to clearly present the problem of catastrophic forgetting in CL and how the proposed method addresses it by reducing the input feature map size and optimizing feature reuse. The methodology is described in detail, providing clarity on how the approach works and is implemented.
4. By reducing training time and memory usage while improving accuracy, the framework can integrate with existing rehearsal-based methods without extensive modification, which underscores its practical significance and potential impact on the field.
Weaknesses: 1. While the framework shows promising results in specific settings and datasets (like Split CIFAR-10 and Split Tiny ImageNet), the paper does not thoroughly discuss its performance across a wider range of scenarios or more complex datasets.
2. The effectiveness of the proposed method is somewhat dependent on the buffer size used for rehearsal in continual learning. The reliance on buffer size might limit its utility in extremely constrained environments where memory is severely limited.
3. Freezing the FFE based on the first task might introduce scalability issues, as the encoder might not efficiently handle the complexity introduced by a broader set of tasks or more diverse data.
4. In Table 1, it is observed that under the Task Incremental Learning (Task-IL) setting, the integration of the CLFD framework leads to worsened performance in some experimental results. This highlights a potential weakness, as the paper does not sufficiently explain why the framework underperforms in these specific scenarios.
5. The paper exhibits several technical and typographical issues that affect its formal presentation and readability. For example, the multiplication symbol in Section 3.3 is written as “x” instead of the standard “$\times$”. In Eq. 2, symbols are not bolded. Figure 4’s caption lacks a period at the end. Other typos, such as writing “FFE” as “FFD” in the conclusion, introduce confusion and can be misleading about key terms and components described in the paper. In addition, there are errors in the formulation of some equations: for instance, the subscript in Eq. 7 should start from 1 instead of 0, and Eq. 8 is also incorrectly formulated.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations have been discussed in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our sincere gratitude to the reviewer for offering thoughtful feedback and providing a constructive evaluation of our work. The valuable input has significantly contributed to the improvement of our paper.
# The paper does not thoroughly discuss its performance across a broader range of scenarios or more complex datasets.
We agree with the reviewer that conducting experiments on a broader dataset is essential. Accordingly, we have supplemented our experiments with the Split Imagenet-R dataset and provided a detailed discussion in the general response.
# The reliance on buffer size might limit its utility in extremely constrained environments where memory is severely limited.
While the dependence on buffers does consume some storage resources, our method can achieve excellent performance even with smaller buffer sizes by preserving the frequency domain features of the image instead of the original image, as shown in Table 1. In contrast, many continual learning methods [1,2] that do not rely on buffers need extra parameters, such as teacher models, to prevent model forgetting, which use significantly more storage than rehearsal-based methods.
More importantly, for continual learning on resource-constrained devices, the limitation of memory resources is much more critical than that of storage resources. This necessitates continual learning methods to have lower peak memory usage. As shown in Table 1, our method can achieve up to a 3.0 $\times$ reduction in peak memory usage. This effectively promotes the application of continual learning methods on resource-constrained devices.
[1] Smith, James, et al. "Always be dreaming: A new approach for data-free class-incremental learning." *ICCV*. 2021.
[2] Li, Zhizhong, and Derek Hoiem. "Learning without forgetting." *IEEE transactions on pattern analysis and machine intelligence* 40.12 (2017): 2935-2947.
# Freezing the FFE based on the first task might introduce scalability issues.
We agree with the reviewer that freezing the FFE does affect the plasticity of the model. However, not freezing the FFE leads to severe forgetting problems. To balance the plasticity and stability of the model, it is crucial to ensure that no cross-task learnable parameters are introduced. As shown in Figure 8 in the appendix, our method consistently improves the balance between plasticity and stability in continual learning methods.
# The framework underperforms in the Task-IL setting.
We thank the reviewer for identifying the missing Task-IL accuracy analysis. This result can be explained from two aspects: (1) FFE encodes the frequency domain features of the input image, reducing the size of the input feature map to one-fourth of its original size. This leads to fewer learnable features and increases the difficulty of classifying classes within a task. (2) To minimize the overlap of frequency domain features between inputs with different semantics across tasks, CFFS selects only a subset of frequency domain features for classification. This selection also increases the difficulty of classifying classes within tasks. However, as task difficulty increases, the advantage of our framework becomes more pronounced in the Task-IL setting. For instance, as shown in Table 1 in the attached PDF, our framework significantly enhances Task-IL accuracy on the Split Tiny ImageNet dataset with buffer sizes of 200 and 500.
Considering that Task-IL is generally regarded as an easier CL scenario compared to Class-IL, this research, consistent with previous work [1], primarily focuses on the more intricate Class-IL setting (as mentioned in line 150). Furthermore, we emphasize that the novelty of this work lies not only in improving accuracy but also in significantly enhancing the training efficiency of continual learning. Additionally, we have validated our framework on edge devices, thereby promoting the application of continual learning in such environments.
[1] Wang, Zifeng, et al. "Sparcl: Sparse continual learning on the edge." NeurIPS. 2022.
# The technical and typographical issues.
Thanks for pointing out these technical and typographical issues, we will revise them in the next release.
We once again thank the reviewer for providing detailed and insightful feedback. Please let us know if there are any open points that we may have overlooked.
---
Rebuttal Comment 1.1:
Comment: Overall, I am satisfied with the responses provided to the first and second weaknesses outlined in my initial review.
However, regarding the third weakness, I noticed that there still lacks experimental evidence specifically validating that CLFD can effectively overcome scalability issues. This echoes the last weakness highlighted by Reviewer KM8C.
Additionally, I am particularly concerned about the performance degradation observed in the results for the S-CIFAR-10 dataset under the Task-IL setting when CLFD is integrated. Considering that Task-IL is generally regarded as a simpler scenario compared to Class-IL, **the inferior performance in an easier setting raises doubts about the overall efficacy of the CLFD framework**. Moreover, when compared to SparCL [1], SCoMMER [2] and TriRE [3], the CLFD results still show an obvious gap. This discrepancy further accentuates my concerns regarding the effectiveness and robustness of CLFD.
After considering the responses to the weaknesses I outlined and feedback from other reviewers, I maintain my original rating with a score of 4: borderline reject.
[1] Wang, Zifeng, et al. "Sparcl: Sparse continual learning on the edge." NeurIPS. 2022.
[2] Sarfraz, Fahad, Elahe Arani, and Bahram Zonooz. "Sparse coding in a dual memory system for lifelong learning." AAAI. 2023.
[3] Vijayan, Preetha et al. “TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion.” NeurIPS. 2023.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer wmJG
Comment: We thank the reviewer for the swift response. Based on your suggestion, we conducted additional experiments.
# CLFD can effectively overcome scalability issues.
To validate the effectiveness of CLFD in overcoming scalability challenges, we divide the Split ImageNet-R dataset into 20 tasks and evaluate the performance of both ER-ACE and CLFD-ER-ACE. Additionally, we report the average Class-IL accuracy after the completion of each task as the number of tasks increases from 10 to 20. The results are as follows:
| Methods | Class-IL Accuracy (%) | Task 10 | Task 11 | Task 12 | Task 13 | Task 14 | Task 15 | Task 16 | Task 17 | Task 18 | Task 19 | Task 20 |
|-------------------|-----------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| ER-ACE | 10.63 | 15.38 | 15.37 | 15.26 | 12.75 | 11.60 | 12.22 | 8.63 | 7.48 | 10.07 | 10.00 | 10.63 |
| CLFD-ER-ACE | 13.44 | 15.93 | 16.43 | 17.20 | 13.59 | 15.42 | 13.08 | 9.75 | 14.69 | 13.26 | 12.79 | 13.44 |
The results demonstrate that, on the complex Split ImageNet-R dataset, our framework consistently enhances the performance of continual learning methods, whether the dataset is divided into 10 tasks or 20 tasks. This indicates that our framework exhibits good scalability. We hope to provide results for additional methods before the end of the discussion period; however, time constraints and limited computational resources are significant challenges. In any case, we will incorporate these results into our final revision.
# The performance of CLFD on the S-CIFAR-10 dataset under the Task-IL setting.
Since we primarily focus on performance under the Class-IL setting, we select only 60% of the frequency domain features for classification in CFFS. When we select 90% of the frequency domain features, the accuracy results under the Task-IL setting with a buffer size of 50 are as follows:
| Methods | Task-IL Accuracy (%) |
|-------------------|-------------------|
| ER | 86.36 |
| DER++ | 83.51 |
| ER-ACE | 85.78 |
| CLS-ER | 89.71 |
| CLFD-ER | 87.97 |
| CLFD-DER++ | 83.91 |
| CLFD-ER-ACE | 89.83 |
| CLFD-CLS-ER | 90.74 |
The results indicate that our framework consistently improves accuracy under the Task-IL setting, which is also validated in Table 2 of the ablation study. When tested on the simpler S-CIFAR-10 dataset, removing the CFFS module enhances accuracy under the Task-IL setting, consistent with our previous explanations regarding CFFS. By adjusting the CFFS module, our framework is able to consistently improve accuracy under both Class-IL and Task-IL settings across all datasets.
# Comparison with SCoMMER [1] and TriRE [2].
Given that our framework efficiently integrates with various continual learning methods, we conduct integration tests of CLFD with SCoMMER [1] and TriRE [2]. We strictly maintain all hyperparameters and experimental settings consistent with those in the original papers. Using the Mammoth CL repository [3], we perform the tests on the S-CIFAR-10 dataset with a buffer size of 200 to ensure a fair comparison. The results are as follows:
| Methods | Class-IL Accuracy (%) |
|-------------------|-------------------|
| SCoMMER | 61.34 |
| TriRE | 56.18 |
| CLFD-SCoMMER | 63.69 |
| CLFD-TriRE | 60.03 |
The experimental results demonstrate that our framework consistently enhances the accuracy and training efficiency of both SCoMMER and TriRE. The observed variations in accuracy are attributed solely to the adjustments made to the data augmentation technique within the Mammoth CL repository, as a unified experimental setup is utilized. The original data augmentation technique from the repository cannot be directly applied to the frequency domain. Hence, we implement a simpler data augmentation technique, which is detailed in Sections E.2 and E.3 of the appendix.
[1] Sarfraz, Fahad, Elahe Arani, and Bahram Zonooz. "Sparse coding in a dual memory system for lifelong learning." AAAI. 2023.
[2] Vijayan, Preetha et al. “TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion.” NeurIPS. 2023.
[3] Buzzega, Pietro, et al. "Dark experience for general continual learning: a strong, simple baseline." NeurIPS. 2020.
Once again, we express our gratitude to the reviewer for the valuable feedback provided. If there are any remaining concerns, please inform us. We are prepared to provide additional information to ensure a thorough understanding of our work.
---
Rebuttal 2:
Title: Supplementary experiments on the Split ImageNet-R dataset
Comment: We have completed the experiments with other methods on the Split ImageNet-R dataset under the 20-task setting. The results are as follows:
| Methods | Class-IL Accuracy (%) | Task 10 | Task 11 | Task 12 | Task 13 | Task 14 | Task 15 | Task 16 | Task 17 | Task 18 | Task 19 | Task 20 |
|-------------------|-----------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| ER | 5.22 | 9.49 | 8.82 | 7.44 | 6.87 | 5.93 | 5.34 | 4.70 | 5.63 | 5.13 | 5.33 | 5.22 |
| CLFD-ER | 5.23 | 9.51 | 10.70 | 8.79 | 7.32 | 6.32 | 5.94 | 4.77 | 5.76 | 5.46 | 5.97 | 5.23 |
| DER++ | 9.47 | 16.37 | 15.23 | 9.58 | 12.15 | 11.31 | 10.64 | 9.79 | 9.78 | 9.66 | 6.10 | 9.47 |
| CLFD-DER++ | 10.05 | 18.04 | 17.64 | 12.16 | 15.66 | 12.56 | 12.13 | 11.00 | 10.48 | 10.17 | 7.86 | 10.05 |
| ER-ACE | 10.63 | 15.38 | 15.37 | 15.26 | 12.75 | 11.60 | 12.22 | 8.63 | 7.48 | 10.07 | 10.00 | 10.63 |
| CLFD-ER-ACE | 13.44 | 15.93 | 16.43 | 17.20 | 13.59 | 15.42 | 13.08 | 9.75 | 14.69 | 13.26 | 12.79 | 13.44 |
| CLS-ER | 8.72 | 6.83 | 6.38 | 6.09 | 6.63 | 6.38 | 7.17 | 6.91 | 7.16 | 7.35 | 8.07 | 8.72 |
| CLFD-CLS-ER | 11.12 | 9.77 | 10.30 | 9.99 | 10.64 | 10.72 | 11.32 | 11.14 | 11.22 | 11.36 | 11.43 | 11.12 |
These results provide a more comprehensive demonstration of the scalability of our framework. We have included a comprehensive analysis of the framework's scalability in the final version of the paper.
---
Rebuttal 3:
Title: Reply to Reviewer wmJG
Comment: We express our sincere appreciation for the thoughtful feedback offered by the reviewer and the constructive evaluation of our work. We have addressed all the technical details and typographical issues as per your suggestions. Additionally, following your recommendation, we conduct a detailed comparison and ablation study on the other hyperparameters presented in Section E.2. Since our framework can be integrated with various continual learning methods, we do not use grid search; instead, we individually investigate the impact of each hyperparameter on continual learning performance. We focus on the CLFD-ER-ACE method for this study, as it performs well across all datasets and does not introduce any additional hyperparameters. We conduct tests on the S-CIFAR-10 dataset with a buffer size of 50. The accuracy results under the Class-IL setting are as follows:
| $\beta_c$ in Eq. 5 | 0.5 | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 |
|-------------------------------|-----|---|-----|---|-----|---|-----|---|-----|---|
| CLFD-ER-ACE | 50.70 | 50.19 | 51.47 | 52.74 | 50.29 | 52.48 | 51.65 | 51.41 | 50.73 | 52.36 |
| $\lambda$ in Eq. 4 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
|-------------------------------|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| CLFD-ER-ACE | 50.85 | 52.08 | 51.24 | 51.68 | 52.74 | 51.77 | 50.81 | 50.68 | 51.81 |
| $\mathcal{E}$ in CFFS | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 |
|---------------------------------|----|-----|-----|-----|-----|-----|-----|-----|-----|
| CLFD-ER-ACE | 52.06 | 52.27 | 52.16 | 52.74 | 52.43 | 52.18 | 52.65 | 52.64 | 52.01 |
Our choice of hyperparameters in the paper yields optimal performance, and our framework demonstrates robustness to these hyperparameter selections. We hope that these ablation studies will provide the reviewer with a more comprehensive understanding of our work. We once again extend our gratitude for your thoughtful and highly constructive feedback.
---
Rebuttal Comment 3.1:
Comment: Thank you for your timely response, which provides a more comprehensive understanding of the proposed method. Given these additional experiments, I have increased my rating to 6: Weak Accept. I also appreciate the authors' patient and detailed response, and I have no further questions. | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers' valuable comments and concerns. In the response below, we hope to have addressed all the points raised. Should there be any further questions or clarifications needed, please inform us so we can fully address any aspects of the paper during the rebuttal period. We plan to use additional pages to provide detailed clarifications on the issues raised by the reviewers, as outlined in the following responses. In this general response, we have clarified the reviewers' concerns regarding the accuracy of the proposed method with larger buffer sizes and on other datasets.
# The performance of our framework on larger buffer sizes.
We maintained the same buffer size selection as the state-of-the-art efficient continual learning method, SparCL [1], and conducted re-experiments on the Split CIFAR-10 and Split Tiny ImageNet datasets. Table 1 in the attached PDF presents the experimental results, demonstrating that our method still significantly enhances the performance of various rehearsal-based continual learning methods. It is important to note that while improving accuracy is a notable outcome, it is not the primary goal of this work. The novelty of this work lies in introducing the concept of continual learning in the frequency domain, which effectively reduces the training time and peak memory usage of continual learning methods, thereby facilitating their application on edge devices.
[1] Wang, Zifeng, et al. "Sparcl: Sparse continual learning on the edge." NeurIPS. 2022.
# The performance of our framework on other datasets.
We appreciate the reviewers' suggestion that incorporating larger datasets can enhance the credibility of our results. However, we consider Split Tiny ImageNet to be a substantial dataset. Numerous recent studies [1,2] have utilized Split Tiny ImageNet as the largest dataset in their experiments. Additionally, Mammoth CL [3], a widely adopted continual learning repository, also uses Split Tiny ImageNet as its largest dataset.
To further enhance the credibility of our results, we conducted additional experiments on Split ImageNet-R. We tested the Class-IL accuracy of each task with a buffer size of 500. Figure 1 in the attached PDF presents the experimental results, demonstrating that our method still significantly enhances the performance of various rehearsal-based continual learning methods, even on more complex datasets. It is worth noting that the improvement in accuracy is not the sole advantage of our framework. By integrating our framework with rehearsal-based methods on the Split ImageNet-R dataset, training speed increased by up to 1.7 $\times$, and peak memory usage decreased by up to 2.5 $\times$. This demonstrates that our framework can significantly enhance the training efficiency of continual learning, thereby promoting its application on edge devices.
[1] Gao, Qiang, et al. "Enhancing knowledge transfer for task incremental learning with data-free subnetwork." NeurIPS. 2023.
[2] Vijayan, Preetha et al. “TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion.” NeurIPS. 2023.
[3] Buzzega, Pietro, et al. "Dark experience for general continual learning: a strong, simple baseline." NeurIPS. 2020.
Pdf: /pdf/36840d073ad24adba9ead71ecf4ae36ae7a4a87e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
What If the Input is Expanded in OOD Detection? | Accept (poster) | Summary: In previous OOD detection methods, extracting discriminative information from OOD data relative to ID data is challenging with the representation of a single input. This paper provides a novel perspective for out-of-distribution (OOD) detection by leveraging multiple types of corruptions to expand the original single input space. Utilizing the interesting phenomenon of confidence mutation, the authors introduce a new scoring method termed CoVer, which averages the confidence scores measured from multiple input spaces, achieving better separability between ID and OOD distributions. Extensive experimentation underscores that the CoVer approach not only outperforms previous methods in both DNN-based and VLM-based OOD detection benchmarks, but also exhibits commendable compatibility when combined with different OOD detection methods. Moreover, the CoVer technique demonstrates exemplary adaptability across diverse VLM architectures.
Strengths: 1. Well written and technically sound. This paper is well organized and written. Both the motivation and the main contributions of the proposed method are easy to follow. Sufficient experiments have been conducted to demonstrate the effectiveness of the proposed method.
2. Novelty. The challenges of the previous methods in OOD detection are well summarized, i.e., a single input constrains the representation ability for detection. The proposed method expands the input space to perceive the ID and OOD distributions effectively.
3. Extensive experiments and ablations. The paper conducts extensive experiments, including OOD detection using different backbones, hard OOD detection, and various aspects of ablation studies. The performance improvements strongly verify the effectiveness of the proposed method.
4. Clear justification and theoretical analysis. The paper provides a clear and intuitive justification for the proposed method by a thorough theoretical analysis. This analysis clearly elucidates the mechanisms beyond CoVer, making the paper convincing and compelling.
Weaknesses: 1. Lack of explanation and comparison. Different types of corruptions are used in the method, which is similar to the idea of Watermarking (Wang, et al. 2022), where a generic watermark is trained to distinguish between ID and OOD data. The advantages over such approaches are not deeply analyzed.
2. Insufficient clarification. The CoVer approach is flexible and compatible and can be combined with different methods to achieve performance gains. However, the combination is not clearly presented for the different methods.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Watermarking [1] is a very popular and SOTA method for OOD detection. Compared with watermarking and similar approaches, what are the advantages of using corrupted images to expand the input space? Since the corrupted images are the key data in the proposed method, it is necessary to present the advantages of using corrupted images for OOD detection more clearly.
[1]Wang Q, Liu F, Zhang Y, et al. Watermarking for out-of-distribution detection. Advances in Neural Information Processing Systems, 2022, 35.
2. The proposed method can enhance most OOD methods and improve their performance. However, these methods are usually designed with different techniques, such as fine-tuning and zero-shot methods. Hence, I wonder whether there exist differences in how CoVer improves these methods and how to guarantee its effectiveness when combining CoVer with them.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions.
> **W1, Q1:** Lack of explanation and comparison. Different types of corruptions are used in the method, which is similar to the idea of Watermarking, where a generic watermark is trained to distinguish between ID and OOD data. The advantages are not deeply analyzed. Watermarking is a very popular and SOTA method for OOD detection. Compared with watermarking and similar approaches, what are the advantages of using corrupted images to expand the input space? Since the corrupted images are the key data in the proposed method, it is necessary to present the advantages of using corrupted images for OOD detection more clearly.
Thank you for your constructive comments. **We have added the analysis about the Watermarking method with our CoVer in the following two aspects.**
Conceptually, we have noticed that Watermarking utilizes a well-trained mask to help the original images be distinguishable from the OOD data. However, **Watermarking still tries to excavate useful feature representations from a single-input perspective**. In contrast, the critical distinction, and also the advantage, of our CoVer method lies in input dimension expansion with corrupted variants, which instead provides an extra dimension to exploit confidence mutation and better identify OOD samples.
Experimentally, we have conducted the comparison and report the results in **Table 9 in the attached PDF**. The results show that, on the one hand, training an optimized watermark that effectively distinguishes between ID and OOD samples is a **time-consuming process**. On the other hand, CoVer achieves this by introducing corrupted inputs to capture the confidence variations between ID and OOD data during the test phase, which is **simpler**, **faster**, and **more effective**. We will add this part in our revision and discuss it in detail in our appendix.
> **W2, Q2:** Insufficient clarification. The CoVer approach is flexible and compatible and can be combined with different methods to achieve performance gains. However, the combination is not clearly presented for the different methods. The proposed method can enhance most OOD methods and improve their performance. However, these methods are usually designed with different techniques, such as fine-tuning and zero-shot methods. Hence, I wonder whether there exist differences in how CoVer improves these methods and how to guarantee its effectiveness when combining CoVer with them.
Thank you for your thoughtful question. The fine-tuning methods designed for OOD detection focus on **utilizing extra auxiliary outliers** to regularize the model to be better aware of OOD data, while the zero-shot methods focus on **scoring function design**, excavating the original distinguishable features that can best represent the differences. Although their focuses are conceptually different, **our CoVer provides a similar enhancement to both, as the adaptation is on the input side** (i.e., adding the corrupted input dimensions for score averaging). It sheds new light on leveraging the raw input in an extra dimension, **but is not bound to either the fine-tuning or the zero-shot framework** that targets utilizing or enhancing the single-input representativeness for the OOD detection task. Thus, we have found **limited significant differences** when CoVer is combined with these two kinds of methods, both in their high-level adaptation and in our empirical results.
**To guarantee the effectiveness of CoVer**, we recommend using an additional validation set (e.g., the SVHN dataset used in our exploration) as the selection basis for choosing the effective corruption types and severity levels, similar to how previous works like Watermarking learn an optimal mask using a validation set.
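To make this selection procedure concrete, a minimal sketch of validation-based corruption selection is given below. All helper names are hypothetical, the "images" are toy scalars, and the separability proxy is a simplified stand-in for the AUROC/FPR95 criteria actually used; this is an illustration of the idea, not our implementation.

```python
from statistics import mean

def separability(id_scores, ood_scores):
    # Simple proxy for ID/OOD separability: the gap between the mean
    # confidence on ID validation data and on OOD validation data.
    return mean(id_scores) - mean(ood_scores)

def select_corruptions(candidates, id_val, ood_val, score_fn, top_k=1):
    # Rank candidate corruptions by how well scores on corrupted views
    # separate a small ID validation set from an OOD one (e.g. SVHN),
    # then keep the top_k best for input-dimension expansion.
    ranked = sorted(
        candidates,
        key=lambda corrupt: separability(
            [score_fn(corrupt(x)) for x in id_val],
            [score_fn(corrupt(x)) for x in ood_val],
        ),
        reverse=True,
    )
    return ranked[:top_k]

# Toy stand-ins: scalar inputs and a confidence that prefers values near 0.
score_fn = lambda v: -abs(v)
identity = lambda v: v
amplify = lambda v: 2.0 * v  # widens the ID/OOD score gap in this toy setup
chosen = select_corruptions([identity, amplify], [0.0, 0.1], [1.0, 1.2], score_fn)
```

In this toy setup, `amplify` is selected because it enlarges the score gap between the ID-like and OOD-like validation values.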
---
Rebuttal Comment 1.1:
Comment: Thank you for the feedback. I think the rebuttals do address my concern and would like to vote for accepting this paper.
---
Reply to Comment 1.1.1:
Title: Thanks for your positive feedback!
Comment: Thank you very much for the positive feedback! We are glad to hear that our response solved your concerns and will incorporate all suggestions in the revision. | Summary: This paper first identifies the shortcoming of previous out-of-distribution (OOD) detection methods: the single-input paradigm limits the representation dimension for extracting valid information. To address this issue, a novel method CoVer that expands the original input space with multiple types of corruption is proposed. Based on the phenomenon of confidence mutation, CoVer achieves better performance by averaging the confidence scores across different dimensions. The proposed method achieves the best performance in the extensive experiments.
Strengths: 1. The motivation of the paper is easy to follow. The idea of using common corrupted images to expand the input space is both novel and reasonable. While previous methods based on single input often struggle with detecting hard-to-distinguish OOD samples, the proposed method offers a practical approach to improving the performance of OOD detection.
2. The extensive experiments demonstrate the solid contributions of the proposed work, and significant improvements have been achieved for different methods. Using the ImageNet-1K benchmark, the experiments validate the compatibility of the proposed CoVer method effectively.
3. The proposed method is designed with a robust theoretical foundation for confidence mutation and the proposed score function. The analysis is presented clearly and accessibly, offering novel insights and making it easy to understand the mechanism of the proposed framework.
Weaknesses: 1. Although the proposed method can improve the performance of the SOTA methods, whether they face the same challenges and how the proposed method improves each method are not clearly explained.
2. For the different methods, what types of inputs can improve them effectively lacks deep analysis.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Since the proposed method is a framework to improve OOD performance, I wonder how the proposed CoVer method improves each method. Why the proposed method is effective at improving different types of methods should be further explained.
2. The proposed method mainly focuses on expanding the inputs, while the different methods usually focus on different knowledge. I wonder how to decide what types of inputs can improve the performance of different methods.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions.
> **W1, Q1:** Although the proposed method can improve the performance of the SOTA methods, whether they face the same challenges and how the proposed method improves each method are not clearly explained. Since the proposed method is a framework to improve OOD performance, I wonder how the proposed CoVer method improves each method. Why the proposed method is effective at improving different types of methods should be further explained.
Thanks for your constructive comments. **First,** we would like to state that almost all the SOTA methods considered in our work face **the same challenge of excavating discriminative representations for OOD detection.** Although the SOTA methods are designed with different advanced techniques (e.g., ReAct, DICE, and ASH integrate activation regularization or reshaping into the forward path of a single input in DNNs; MCM, LoCoOp, CLIPN, and NegLabel explore CLIP's representation of a single input to detect OOD samples in VLMs), we notice that **the single input may implicitly constrain the representation dimension** for detection, since the discriminative features a single input can offer are limited.
**Second,** our CoVer introduces extra dimensions with corruptions to **reveal the intrinsic distinctions between ID and OOD samples.** In the multi-input space, ID data maintains an overall higher confidence expectation, whereas OOD data encounters notable changes in model confidence, since the high-frequency features are altered, as verified in our analysis. Leveraging the differing trends of ID and OOD samples through a simple but effective averaging operation, our CoVer can further improve the ID-OOD separability when combined with those SOTA methods (i.e., adding the corrupted input dimensions for score averaging).
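For intuition, the averaging operation itself can be sketched in a few lines. The confidence function and corruption transform below are toy stand-ins (not the actual scores or corruptions from the paper); in practice the confidence would come from an MSP- or energy-style scoring function on model outputs.

```python
import math
from statistics import mean, pvariance

def cover_score(confidence_fn, x, corruptions):
    # Expand the single input into multiple dimensions: the original
    # input plus one corrupted variant per corruption type, then
    # average the per-view confidence scores.
    views = [x] + [corrupt(x) for corrupt in corruptions]
    return mean(confidence_fn(v) for v in views)

# Toy stand-in "confidence" that drops as the input gets noisier.
confidence_fn = lambda v: math.exp(-pvariance(v))
corruptions = [lambda v: [a + i for i, a in enumerate(v)]]  # deterministic stand-in

id_like = [0.0, 0.0, 0.0, 0.0]
score = cover_score(confidence_fn, id_like, corruptions)
```

An ID-like input keeps a high score on both views, while an input whose confidence collapses under corruption is pulled down by the average, which is exactly the confidence-mutation effect the score exploits.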
> **W2, Q2:** For the different methods, what types of inputs can improve them effectively lacks deep analysis. The proposed method mainly focuses on expanding the inputs, while the different methods usually focus on different knowledge. I wonder how to decide what types of inputs can improve the performance of different methods.
Thank you for your thoughtful question. For the types of corrupted inputs and their corresponding severity levels, we conducted some related explorations (**e.g., Tables 10 and 11, Figures 6 and 7**) in the Appendix of our original submission for performance reference. We noticed that some specific corruptions (**e.g., Brightness, Fog, Contrast, Motion Blur, Defocus Blur**) can generally improve OOD detection performance. It can be seen that **these types corrupt the input at the non-semantic level**, instead of damaging the semantic features too much as the other types do. This is also consistent with our earlier empirical analysis that confidence mutations are induced by high-frequency feature changes. However, to efficiently **determine which corruption types to choose** for expanding the input dimension, we recommend relying on an additional validation set (e.g., the SVHN dataset used in our exploration) as the selection basis, similar to the hyperparameter selection process in previous works.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, which addresses my concerns. After reviewing other comments, I still believe this paper is technically solid and novel. Therefore, I maintain my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your positive support!
Comment: Thank you very much for your positive support after reading our response! We are glad to hear that our response addressed your concerns and that you find our paper technically solid and novel. We will make sure to incorporate all of your suggestions in the revision. | Summary: Authors ntroduce a new approach to out-of-distribution (OOD) detection by expanding input representation dimensions using common corruptions. Traditional methods focus on single-input representations, limiting their effectiveness. This work identifies "confidence mutation," where OOD samples' confidence levels drop significantly under corruptions, while in-distribution (ID) data maintains higher confidence due to resistant semantic features.
The proposed method, Confidence aVerage (CoVer), averages confidence scores from original and corrupted inputs to improve OOD detection. Extensive experiments show that CoVer enhances performance across various benchmarks. Key contributions include:
1. Expanding input representation dimensions for better OOD detection.
2. Introducing confidence mutation to distinguish OOD data.
3. Proposing the CoVer scoring method for improved separability.
4. Validating CoVer's effectiveness through extensive experiments
This is a nice trick that has many connections to other fields such as NLP.
Strengths: A nice idea which has not been applied to OOD detection but is well known in other fields such as NLP (e.g., retrieval).
Weaknesses: **Omission of Data Depths and Information Projections in Related Work**
Data depths and information projections have shown significant promise in the OOD detection community due to their ability to provide robust and high-dimensional representations of data. Data depths, in particular, are well-suited for this problem as they project in all possible directions, capturing the entire distribution's structure and providing a natural fit for OOD detection tasks. Despite their relevance, these approaches are notably absent from the related work section of this paper. Including them could provide a more comprehensive overview of the field and strengthen the contextual foundation of the study. Relevant works that should be considered include:
M. Darrin. "Unsupervised Layer-wise Score Aggregation for Textual OOD Detection."
M. Picot. "A Halfspace-Mass Depth-Based Method for Adversarial Attack Detection." TMLR 2023.
P. Colombo. "Beyond Mahalanobis Distance for Textual OOD Detection." NeurIPS 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why were data depths, which project onto all possible directions and are naturally suited for OOD detection, not explored in your related work?
2. Why were classical methods like Isolation Forest, which are well-known for their anomaly detection capabilities, not considered?
3. Can you compare your results against the previously introduced baselines involving data depths, information projections, and classical methods like Isolation Forest?
To me, the paper is incomplete, and the positioning as well as the choice of methods (ReAct, DICE) is poor and does the paper a disservice, whereas classical ML tools would be appropriate.
I encourage the authors to revise.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions.
> **W1:** Omission of Data Depths and Information Projections in Related Work
Thanks for your valuable suggestions and for bringing these insightful works on data depths and information projections to our attention. Here, **we have compiled a related discussion of these studies** below.
Computing OOD scores on the embedding output of the last layer of the encoder is not the best choice for textual OOD detection. To address this, [1] proposed aggregating OOD scores across all layers and introduced an extended text OOD classification benchmark, MILTOOD-C. In a similar vein, RainProof [2] introduced a relative information projection framework and a new benchmark called LOFTER on text generators, considering both OOD performance and task-specific metrics. Building on the idea of information projection, REFEREE [3] leveraged I-projection to extract relevant information from the softmax outputs of a network for black-box adversarial attack detection. On the other hand, APPROVED [4] proposed to compute a similarity score between an input sample and the training distribution using the statistical notion of data depth at the logit layer. HAMPER [5] introduced a method to detect adversarial examples by utilizing the concept of data depths, particularly the halfspace-mass (HM) depth, known for its attractive properties and non-differentiability. Furthermore, TRUSTED [6] relied on the information available across all hidden layers of a network, leveraging a novel similarity score based on the Integrate Rank-Weighted depth for textual OOD detection. LAROUSSE [7] employed a new anomaly score built on the HM depth to detect textual adversarial attacks in an unsupervised manner.
**We will add the above to the related work section for providing a comprehensive overview, and also add the formal citations for reference in our revised version.**
> **Q1:** About data depths
Thanks for your question. **We have discussed the related work on data depths in the previous response and will add it to the related work section in our revised version**. Since the discussion of related work in our original submission mainly refers to recent representative reviews of visual OOD detection, such as [8] and [9], we conducted limited exploration in this direction. Although our CoVer is conceptually distinct from exploring data depths for OOD detection, it is notable that **both provide a new perspective for exploring discriminative features** between ID and OOD samples beyond the raw inputs.
> **Q2:** About Isolation Forest
Thanks for the question. We have noticed that Isolation Forest, though a classical method for anomaly detection (AD), is not a common baseline considered in previous literature [8, 9] on OOD detection. **In fact, AD is quite different from OOD detection, as AD treats ID samples as a whole [9]**. This means that regardless of the number of classes (or statistical modalities) in the ID data, AD does not require differentiation within the ID samples (while OOD detection does). OOD detection considers the knowledge of all known classes, while AD mainly learns the normal patterns from the majority of the data and identifies the anomalies. In our work, **we select the baseline methods following previous well-recognized studies**, such as DNN-based methods (e.g., ReAct, ASH, DICE) and VLM-based methods (e.g., MCM, CLIPN, NegLabel).
> **Q3:** Comparison with data depths, information projections, and Isolation Forest
Thank you for the question. **We have conducted comparison experiments between our CoVer and the baselines you mentioned**, as detailed **in Table 8 in the attached PDF**.
Due to the large scale of the ImageNet training set, we sampled 50 samples per class to construct a subset of the training data to represent the training distribution, as recommended by the similar work NNGuide [10]. For data depths, we reimplemented APPROVED [4] for comparison. For information projections, we reproduced REFEREE [3] for comparison. For Isolation Forest, we use the logits as input to detect anomalous logits in ID and OOD samples.
The results indicate that AD and textual OOD detection methods, such as data depths and information projections, may not be suitable for visual OOD detection tasks, a view also mentioned in related surveys [8, 9]. Similarly, classical ML methods for AD, such as Isolation Forest, seem to fail to excavate discriminative representations when applied to image OOD detection. However, since these methods are insightful in distinguishing outliers, we believe **it is worth further effort in the future to adapt their critical intuitions to the OOD detection problem.**
**References:**
[1] Darrin M, Staerman G, Gomes E D C, et al. Unsupervised Layer-wise Score Aggregation for Textual OOD Detection. In AAAI, 2024.
[2] Darrin M, Piantanida P, Colombo P. Rainproof: An Umbrella To Shield Text Generators From Out-Of-Distribution Data. In ArXiv, 2022.
[3] Picot M, Noiry N, Piantanida P, et al. Adversarial Attack Detection Under Realistic Constraints. 2022.
[4] Picot M, Staerman G, Granese F, et al. A Simple Unsupervised Data Depth-based Method to Detect Adversarial Images. 2023.
[5] Colombo P, Picot M, Granese F, et al. A Halfspace-Mass Depth-Based Method for Adversarial Attack Detection. TMLR, 2023.
[6] Colombo P, Dadalto E, Staerman G, et al. Beyond Mahalanobis Distance for Textual OOD Detection. In NeurIPS, 2022.
[7] Colombo P, Picot M, Noiry N, et al. Toward stronger textual attack detectors. In Arxiv, 2023.
[8] Yang J, Wang P, Zou D, et al. Openood: Benchmarking generalized out-of-distribution detection. In NeurIPS, 2022.
[9] Yang J, Zhou K, Li Y, et al. Generalized out-of-distribution detection: A survey. In IJCV, 2024.
[10] Park J, Jung Y G, Teoh A B J. Nearest neighbor guidance for out-of-distribution detection. In ICCV, 2023.
---
Rebuttal 2:
Title: Would you mind checking our responses and confirming whether you have any further questions?
Comment: Dear Reviewer HCvU,
Thanks very much for your time and valuable comments on our work.
In the rebuttal, we have tried our best to address the concerns, and provided detailed responses to all your comments and questions. Would you mind checking our responses and confirming if there is any unclear point so that we could further clarify?
Best regards,
Authors of Submission 1367
---
Rebuttal 3:
Title: [Invitation to discussion] Need further clarification?
Comment: Dear Reviewer HCvU,
Thanks very much for your time and valuable comments.
Thanks again for your time and valuable comments. We have tried our best to address the concerns. Specifically, we
- conclude a related discussion about the provided studies and cite them in our manuscript. (W1)
- discuss data depths in our related work and add it to our manuscript. (Q1)
- discuss isolation forest and clarify the difference between anomaly detection and OOD detection. (Q2)
- verify the performance of CoVer with extra comparison experiments with the introduced baselines. (Q3)
Is there any unclear point so that we should/could further clarify?
Thanks for your attention and best regards,
Authors of Submission 1367 | Summary: The paper aims to identify Out-Of-Distribution (OOD) samples by applying common image corruptions (noise, blur, etc.) to the input. The phenomenon is referred to as confidence mutation, where original inputs, along with corruptions, increase the confidence of in-distribution (ID) data, while the confidence of OOD data decreases. Also, a new scoring function is proposed, Confidence aVerage (CoVer), which averages the OOD scores of different corrupted inputs and the original one.
Experiments are on 2 benchmarks (1) traditional OOD - ImageNet-1K as In-Distribution (ID) and iNaturalist, SUN, Places, and Textures as OOD (2) zero-shot OOD - mixture of ImageNet-10, ImageNet-20, ImageNet-100, WaterBirds, and Spurious. The metrics used for evaluation are AUROC and FPR95.
Methods exhibit better performance with the addition of CoVer.
Strengths: The use of corruptions to improve OOD performance is creative. The implementation seems straightforward and does not require extra datasets. Also, CoVer seems to improve every method it's applied to.
The experiments utilize similar backbones/datasets as in the literature. This is helpful when comparing with other methods.
All figures in the paper, including Fig. 1-4, are intuitive and easy to follow. The captions are detailed, and the overview (Fig. 4) makes understanding CoVer very straightforward.
The paper is well-written and clear. It effectively discusses the problem, presents the idea, and describes the experiments in a coherent manner. Additionally, all equations are clearly explained.
Weaknesses: The experimentation/idea is somewhat weak and seems more like an application rather than significant knowledge advancement. For example, CoVer is not standalone but applied to an existing method like ASH (see Table 1) or on top of a VLM, thereby leveraging the performance of other methods. Additionally, the claimed contribution of a novel scoring method doesn't seem entirely novel, as averaging is straightforward and an existing scoring function like Softmax/Energy is subsequently employed.
Comparisons in Table 1 with different baselines is a little unfair because CoVer makes use of "extra" data (corrupted images), while others simply use a single input/image.
Evaluating OOD performance when using VLM/CLIP models is problematic. Such models utilize a vast amount of data which makes it near impossible to know what images/classes have been seen before and are considered "in-distribution". While the community continues to use them in various settings, their use with OOD problems should be a clear limitation.
I find it unclear how to determine which corruption types or severity levels to use if I implemented CoVer. The results suggest variability/inconsistency without a clear guideline for decision-making.
The runtime of CoVer is not discussed. From my understanding, one would need to pass every corrupted input and extract its features, which is time consuming and uses more space.
There is no discussion of failure cases. When using corrupted inputs, I assume there are instances when CoVer labeled an ID image as OOD.
Since the OOD datasets used in Table 1 differ significantly from ImageNet, it would be intriguing to observe CoVer's performance on more challenging ones such as NINCO [1] (ensures no categorical contamination) without leveraging CLIP.
The experiments do not utilize or compare with NNGuide [2] or MaxLogit [3], more recent and better performing methods than those in the paper.
[1] Bitterwolf, Julian, Maximilian Müller, and Matthias Hein. "In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation." International Conference on Machine Learning. PMLR, 2023.
[2] Park, Jaewoo, Yoon Gyo Jung, and Andrew Beng Jin Teoh. "Nearest Neighbor Guidance for Out-of-Distribution Detection." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[3] Hendrycks, Dan, et al. "Scaling Out-of-Distribution Detection for Real-World Settings." International Conference on Machine Learning. PMLR, 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Table 1, why is CoVer only combined with ASH and not other methods?
Table 6 in the appendix demonstrates that CoVer is integrated with other methods using different types of corruptions and severity levels for each method. This raises questions about the comparisons. Why are the types and levels of corruptions not the same across all methods?
Since the paper has different ablations (various corruption types, number of dimensions/corrupted inputs, severity levels) and the experiments make an explicit choice on each one, was a validation set ever used? I did not see anything in the code or the paper, indicating choices were tuned on the test set.
Considering that other methods only use one input, how much additional runtime does CoVer add? For example, given the size of ImageNet-1K, using CoVer would necessitate extracting 6 million features (1.2 million * 5 corruptions).
How effective would CoVer be if it encountered much more difficult OOD data without leveraging VLMs/CLIP?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discussed limitations and broader impact in Section Appendix E.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions.
>**W1:** About Novelty
We would like to restate that our work is novel in introducing a new perspective, i.e., expanding the input dimension with corrupted variants, for OOD detection. **This has not been explored in previous methods [1, 2]**, and our approach is both **conceptually and technically distinct** from them, with significant knowledge advancement.
**Conceptually,** we are the first to expand the raw input with its corrupted variants, to identify the phenomenon of confidence mutation, and to reveal the underlying mechanism via high-frequency feature changes. **The multiple input dimensions introduced in our framework** are the critical advance over previous methods, which focus on a single input.
**Technically,** CoVer captures the intrinsic differences between ID and OOD samples using an averaging operation, which we found to be **simple yet highly effective**, as evidenced by our experiments in **Tables 1, 2, and 3 of the original submission**. As our idea provides a different perspective, combining it with these SOTA methods aims to demonstrate that **CoVer offers an additional enhancement not covered by the base method**. Just as previous methods (ASH, DICE, ReAct) originate from Softmax/Energy scores, we believe our work brings new insights to this problem.
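As an illustrative aside (a minimal sketch, not the authors' actual implementation; `energy_score` and the corruption callables are placeholders), the averaging operation over the original and corrupted inputs could look like:

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Standard energy-based OOD score: higher means more ID-like.
    return T * np.log(np.sum(np.exp(logits / T), axis=-1))

def cover_score(model, image, corruptions):
    # Expand the single input into multiple "dimensions": the original
    # image plus its corrupted variants, then average their confidences.
    variants = [image] + [c(image) for c in corruptions]
    scores = [energy_score(model(v)) for v in variants]
    return np.mean(scores, axis=0)
```

An input would then be flagged as OOD when this averaged score falls below a threshold, exactly as with a single-input score.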
> **W2:** About "extra data"
Thank you for the question. We would like to clarify that **it is our novelty to use extra corrupted inputs to expand the input space, and it is not unfair.** While previous OOD detection methods have distinguished between ID and OOD data based on a single input, **we are the first to identify the challenges in this paradigm and propose expanding it to multiple inputs**, measuring their confidences respectively. With these novel insights, CoVer **provides a new perspective** for the OOD detection problem.
> **W3:** About using VLMs/CLIP
Thank you for the insightful comments.
First, we have conducted various experiments of CoVer on the **DNNs architecture (ResNet50)**. The results demonstrate performance improvements and **exhibit the same trend as those on VLMs** (**refer to Table 1 and Table 3 of the original submission**).
Second, we would like to explain that **the zero-shot settings in our work follow the MCM [3]**, which is the pioneering work in zero-shot OOD detection with VLM. As in MCM, the in-distribution classes in zero-shot OOD detection are the classification task of interest, which is defined by a set of class labels/names $Y_{in}$ instead of the classes used in pre-training. Accordingly, OOD is defined w.r.t. the ID classes, not the data distribution during pre-training. Hence, we only utilize the powerful image encoder to extract visual features and the text encoder to extract textual embeddings as ID concept prototypes, without the prior knowledge for distinguishing ID and OOD classes. We will add this part of explanation to our paper.
> **W4, Q2:** About which corruption types or severity levels to use
Thanks for your constructive comments.
First, in all experiments, we use the **SVHN dataset as the validation set** to select the most effective corruption types for each method. **We will clearly explain the usage of the validation set in our main paper**.
For the types of corrupted inputs and their corresponding severity levels, we have conducted related explorations (**e.g., Tables 10 and 11, Figures 6 and 7 of our original submission**) for performance references. Some specific corruptions (**e.g., Brightness, Fog, Contrast, Motion Blur, Defocus Blur)** can generally improve the OOD detection performance, as the corruptions are mainly on the non-semantic level of the input, instead of damaging the semantic features too much like the other types.
Empirically, referring to **Table 3 in the attached PDF**, using **the same type of corruption** as the expanded input (e.g., Brightness with severity 1) performs better than the original version. This verifies the previous intuition, provides general guidance for choosing appropriate corruption types, and aids understanding of dimension expansion for OOD detection.
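To make the expansion step concrete, here is a hedged toy sketch of building the expanded input set; the `brightness` function below is a simplified placeholder, not the actual common-corruptions implementation:

```python
import numpy as np

def brightness(image, severity=1):
    # Toy brightness corruption: add a severity-dependent shift and clip
    # to [0, 1] -- a non-semantic perturbation in the spirit of the paper.
    shift = [0.1, 0.2, 0.3, 0.4, 0.5][severity - 1]
    return np.clip(image + shift, 0.0, 1.0)

def expand_input(image, corruptions=((brightness, 1),)):
    # Return the original image together with its corrupted variants,
    # e.g., a single Brightness variant at severity 1 as suggested above.
    return [image] + [fn(image, severity=s) for fn, s in corruptions]
```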
> **W5, Q4:** About runtime
Thank you for your valuable question. Runtime is indeed an issue we didn't consider. As you mentioned, if there are $N$ expanded dimensions, it will take $N$ times the duration of a single input to implement CoVer. However, **our CoVer is only applied in the inference phase of OOD detection**, and it is **generally fast**. As shown in **Table 4 in attached PDF**, we report the inference time of each single input on ID and OOD datasets. **We will clearly discuss the runtime issue in the revised version.**
> **W6:** About failure cases
Thank you for the constructive comment. We indeed identified some failure cases and reported them in **Figure 7 of the original submission**. When CoVer utilizes certain severe corruption types (e.g., Spatter, Elastic transform), its performance is worse than with single input. This is because these types are more severe compared to others, leading to **excessive damages to semantic features**. Effective corruption types are those **only perturb non-semantic features**, which generally exist at the high-frequency level, resulting in different confidence variations between ID and OOD data. We will incorporate these discussions into the revised version and make it clearer with our empirical results.
**References**:
[1] Yang J, Wang P, Zou D, et al. Openood: Benchmarking generalized out-of-distribution detection. In NeurIPS, 2022.
[2] Yang J, Zhou K, Li Y, et al. Generalized out-of-distribution detection: A survey. In IJCV, 2024.
[3] Ming Y, Cai Z, Gu J, et al. Delving into out-of-distribution detection with vision-language representations. In NeurIPS, 2022.
> **W7, Q5, W8, Q1, Q3**
Due to the space limit, we place our answer in the general response.
---
Rebuttal 2:
Title: Would you mind checking our responses and confirming whether you have any further questions?
Comment: Dear Reviewer LFoY,
Thanks very much for your time and valuable comments on our work.
In the rebuttal, we have tried our best to address the concerns, and provided detailed responses to all your comments and questions. Would you mind checking our responses and confirming if there is any unclear point so that we could further clarify?
Best regards,
Authors of Submission 1367
---
Rebuttal 3:
Title: [Invitation to discussion] Need further clarification?
Comment: Dear Reviewer LFoY,
Thanks again for your time and valuable comments. We have tried our best to address the concerns. Specifically, we
- restate our novelty from both conceptual and technical perspectives (W1);
- clarify the fair comparison in using CoVer's novel expanded inputs. (W2);
- discuss the rationale for using VLM/CLIP models in OOD detection experimentally and definitionally. (W3);
- clarify the usage of the validation set and provide general guidance for choosing appropriate corruption types empirically. (W4, Q2, Q3)
- discuss the runtime issue with extra experimental results. (W5)
- further clarify the failure cases in practice. (W6)
- verify the performance of CoVer on NINCO dataset with extra experiments. (W7, Q5)
- verify the performance of CoVer with comparison with NNGuide and MaxLogit methods. (W8)
- explain the reason for combining only with ASH and demonstrate the effectiveness of combining other methods empirically. (Q1)
Is there any unclear point so that we should/could further clarify?
Thanks for your attention and best regards,
Authors of Submission 1367
---
Rebuttal Comment 3.1:
Comment: Thank you for your response. While I appreciate the enthusiasm of authors for feedback, I have other commitments in addition to the NeurIPS reviews. I will get back to you as soon as possible, and I kindly ask for your patience in the meantime.
---
Rebuttal 4:
Title: Rebuttal Acknowledgement
Comment: I thank the authors for taking time to submit a rebuttal and providing some clarity.
> General Response (NINCO, NNGuide and MaxLogit, ASH)
Thanks for the comparison with more recent data and methods.
Using SVHN as a validation set is an interesting choice because the data is much smaller (32x32) than ImageNet (ID set) or any of the OOD datasets.
> Knowledge Advancement
My comment should have indicated that the advancement in knowledge is not as significant, rather than suggesting that the idea itself is weak. CoVer requires [1] (corruptions and severity levels) and must be applied to an existing OOD method. Other work [2] has utilized [1] in OOD detection, but simply as an attack. The contribution in terms of knowledge advancement or new perspective is using a normal image in conjunction with a corrupted image/s for post-hoc OOD detection enhancement by averages scores.
Utilizing CoVer would require applying it to every method for a fair comparison rather than using it as a standalone technique, as it involves more than a single input. This isn't necessarily bad or wrong, but it is an important consideration when determining its impact. Its application does not alter performance rankings, and one would generally expect performance to improve with the use of multiple inputs.
> Extra data
I believe this should be listed as a limitation because, for CoVer to be effective, it requires more than one input compared to other methods. Similar to how OpenOOD distinguishes between "Training Methods" and "Training With Extra Data," CoVer would need to be categorized as "Post-hoc Methods With Extra Data." This is why I mentioned that Table 1 seemed a bit unfair, as it compares methods using one input vs. two or more (CoVer).
> VLMs/CLIP
I am aware how MCM and OOD are defined, but my point is an image encoder coming from a VLM/CLIP model is of course going to be better because it has seen more data than one with only ImageNet-1K. It is an advantage that should be noted.
> Corruption types / severity levels
The variability I mentioned arises because the experiments used different corruption types and/or severity levels for nearly every method. This makes comparisons less discernible. Also, it introduces variability depending on the architecture, dataset, and other factors.
The attached PDF demonstrates that using a single corruption type along with a normal input improves AUROC and FPR95 alone.
> Runtime
Thank you for attaching the runtime table. My concern is that if one were to use CoVer and explore corruption types and severity levels across different architectures and their own data, it could become time-consuming due to the amount of variability involved.
> Failure cases
This was my oversight, as I missed Figure 7. I was simply wondering in what situations the use of CoVer might not make sense and a single input would be preferred.
**References**:
[1] Hendrycks, D., & Dietterich, T. (2018). Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. In International Conference on Learning Representations.
[2] Chen, J., Li, Y., Wu, X., Liang, Y., & Jha, S. (2021). ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining. In Machine Learning and Knowledge Discovery in Databases.
---
Rebuttal Comment 4.1:
Title: [1/2] Thanks for your feedback!
Comment: We appreciate the reviewer's engagement during the discussion phase. Below are our detailed responses to your comments.
> About SVHN as the validation set
Thanks for acknowledging our results on advanced comparison.
We would like to clarify that it is necessary to **scale** the SVHN images from **32 $\times$ 32 to 224 $\times$ 224** when using SVHN as the validation set on the ImageNet benchmark, the same as in previous work like Watermarking [1]. This ensures that the SVHN image size is consistent with that of ImageNet and the other OOD datasets.
> About knowledge advancement
Thanks for the clarification on your concern.
First, **regarding the knowledge advancement**, we believe it is significant. While it is true that previous works used corruptions in OOD detection, e.g., as an attack [2], the key difference lies in **our innovative use of normal and corrupted images in conjunction to enhance post-hoc OOD detection through averaged scores.** Furthermore, [2], which treats corruptions as attacks, also **needs to be applied within an existing OE framework and utilizes the original OOD score**; likewise, our CoVer requires integration with an existing OOD scoring function, but with a distinct purpose and perspective.
Second, **concerning the impact and fair comparison**, we would like to clarify that **many other methods also need to be integrated with existing techniques and OOD scores** (e.g., DICE, ReAct, ASH in our previous response). Our approach, to combine with other OOD scoring functions and methods, **is intended to demonstrate the validity of our new perspective.** While it's true that utilizing multiple inputs could generally improve performance, the primary contribution of our work is the introduction of a novel approach that **offers a fresh insight on post-hoc OOD detection**.
> About extra data
Thanks for the suggestion.
First, we would like to conceptually clarify that corrupted forms of existing images are not actually "extra data", since **extra data is generally considered to be data that does not intersect with the existing samples in their semantic label space [3]**. For instance, the outlier data used in outlier exposure (OE) based methods is regarded as "extra data", since it belongs to a disjoint label space. However, our expanded inputs originate from the existing data, sharing the same label space but in corrupted form.
Second, we would like to reiterate that our innovation lies in expanding the input with corruptions, as highlighted in the title of our work. **Even when using two or more** copies of the original input **without corruptions**, such methods would yield results similar to those obtained with a single input, underscoring the importance of our approach. CoVer sheds light on using corruptions to effectively expand the input, which sets it apart.
Nevertheless, we acknowledge this in our limitations, since our CoVer does introduce multiple inputs, which may increase computational costs. **However, this should not be viewed as conceptually unfair, as the advantage lies in the new perspective of our method rather than an inherent imbalance.** **For instance,** NegLabel [4] proposed introducing numerous negative words into CLIP (which is also like "extra data") to boost the performance of post-hoc OOD detection without labeling their method as 'Post-hoc Methods With Extra Data' or comparing it with other methods also expanded with negative labels. **This further highlights the importance of the new perspective compared with the fairness issue, similar to our current case.**
> About VLMs/CLIP
Thank you for your valuable suggestion. **We will clearly note the advantage of VLMs/CLIP used for OOD detection in the revised version.**
An additional note is that it is because CLIP's image encoder has seen more data and is more powerful that CLIP can be used for zero-shot OOD detection without any additional data for training.
> About corruption types / severity levels
Thanks for your acknowledgement!
As you mentioned, the lack of a clear guideline on how to select the proper corruption types would result in non-negligible variability of CoVer in practice. Following your suggestion, **we will merge all the results into our manuscript with a detailed introduction and discussion of a clear guideline on appropriate corruption types.**
**References:**
[1] Wang Q, Liu F, Zhang Y, et al. Watermarking for out-of-distribution detection. In NeurIPS, 2022.
[2] Chen J, Li Y, Wu X, et al. Atom: Robustifying out-of-distribution detection using outlier mining. In Machine Learning and Knowledge Discovery in Databases, 2021.
[3] Yang J, Zhou K, Li Y, et al. Generalized out-of-distribution detection: A survey. In IJCV, 2024.
[4] Jiang X, Liu F, Fang Z, et al. Negative label guided ood detection with pretrained vision-language models. In ICLR, 2024.
---
Rebuttal Comment 4.2:
Title: [2/2] Thanks for your feedback!
Comment: > About runtime
Thank you for your constructive comment.
**Technically,** we have provided the implementations of different corruptions in the submitted code, and **we also provided an exploration of different corruption types and severities in our previous response, which offers some general insights on choosing corruption types.** If someone were to use CoVer on different architectures and their own data, they could simply construct their own corrupted datasets and refer to the provided general guideline to select the appropriate corruption types. Additionally, we recommend using a validation set for performance tuning.
**As for the variability,** we have conducted various experiments on both mainstream and challenging OOD datasets (including iNaturalist, SUN, Places, Textures, and NINCO), and the results all demonstrate that the provided general guideline (e.g., a single expanded input using Brightness with severity 1) is consistently the better choice for implementing CoVer. As a result, **the variability involved in implementing CoVer can be effectively minimized when a general guideline is available.**
As with other advanced scoring methods (e.g., DICE, ReAct, ASH in our previous response) that introduce feature- or parameter-level manipulation, **we will also discuss this as part of the limitations in the revised version.**
> About failure cases
Thanks for the clarification.
As in our previous response, certain corruption types that **excessively damage the semantic features of images** cause CoVer to perform worse than the single input. This further suggests potential guidance for choosing effective corruption types, namely those that **only perturb non-semantic features**. **We will highlight this part in the main text to provide readers with a comprehensive study of this aspect.**
**Thanks again for your acknowledgement and your feedback in improving our work; we are happy to discuss further if any questions or concerns remain.**
Rebuttal: ## General Response
We appreciate all the reviewers for their thoughtful comments and suggestions on our paper.
We are very glad to see that the reviewers find that **the problem we focus on is important** (R3,R4) within OOD detection research, that the method is **novel** (R1,R2,R3,R4) and **simple but adaptable** (R1,R2,R3,R4) to various other techniques, and that the **experiments are good and comprehensive** and demonstrate the **general effectiveness** of our CoVer (R1,R2,R3,R4). We are also pleased that the reviewers find our writing and figures **very clear** and **easy to understand** (R1,R4).
We have tried our best to address the reviewers' comments and concerns in **individual responses to each reviewer** with comprehensive experimental justification. The reviews allowed us to improve our draft and the contents added in the revised version and **the attached PDF** are summarized below:
**From Reviewer LFoY**
- Clarify and Discuss our main novelty and settings of CoVer. (see Section 1, 3.1, and 3.2 in the original submission, will highlight in these sections)
- Summarize and add results for expanded types and runtime issues (see Table 3,4 in PDF, will add in Appendix F)
- Discuss the failure cases of CoVer (will add and highlight in the limitation part of Appendix E)
- Add the experimental verification on NINCO dataset and compare with NNGuide and MaxLogit (see Table 1,2,5 in PDF, will add in Appendix F)
- Conduct comparison experiments on each mentioned DNN-based methods (see Table 6 in PDF, will add in Appendix F)
- Clarify and Explain the usage of validation set in CoVer (see Table 7 in PDF, will add and highlight in Section 4.1).
**From Reviewer HCvU**
- Discuss and add the reference for methods in related fields. (will add in Appendix B)
- Explain the differences of data depths, information projections and Isolation Forest, and their challenges in visual OOD detection. (will add in Appendix F)
- Add the experimental comparison with newly considered methods. (see Table 8 in PDF, will add in Appendix F)
**From Reviewer NZqw**
- Discuss and revise the statement for our critical observations and add more explanation (will revise in Section 1 and 3.1)
- Show our results about the effects of expanded types and provide some insights on choosing corruption types. (will add and highlight in Section 4.1)
**From Reviewer hcf7**
- Add the experimental comparison with Watermarking. (see Table 9 in PDF, will add in Appendix F)
- Discuss the generalization and effectiveness of CoVer. (see Section 3.1 in the original submission)
**We appreciate your comments and time!** We have tried our best to address your concerns and revised the paper following the suggestions. **Would you mind checking it and confirming if there are any unclear parts?**
---
### Some rest answers:
For **Reviewer LFoY**
> **W7, Q5:** Experiments on NINCO
Thank you for your valuable question. NINCO has proposed three OOD datasets with no categorical contamination which include NINCO, OOD unit-tests, and NINCO popular OOD datasets subsamples. Here, we evaluate the effectiveness of CoVer on these datasets in **Table 5 in attached PDF**. The results demonstrate that CoVer, when combined with ASH, consistently achieves better performance across the three NINCO OOD datasets.
> **W8:** Comparison with NNGuide and MaxLogit
Thanks for your valuable suggestion. We have conducted comparison experiments with NNGuide and MaxLogit to enrich our analysis in **Table 1 in attached PDF**.
First, our experimental results show that CoVer outperforms these competitive post-hoc methods on the ResNet50 architecture.
Second, the performance of these post-hoc methods, especially NNGuide, **drops significantly on the CLIP-B/16 architecture**. We believe the reason for the poor performance is **the difference in training data**. Many post-hoc methods are designed for ImageNet pre-trained networks, where only ID data are used during training. In contrast, both ID and OOD data are used when training CLIP, which leads to different activations on OOD data. Another reason is that post-hoc methods rely heavily on **the choice of hyperparameters**; the hyperparameters of NNGuide need to be re-selected for different models. Despite these issues, our CoVer still performs better than these methods.
Furthermore, we also **combine our CoVer with MaxLogit and NNGuide** and report the results in **Table 2 in attached PDF**, which further demonstrates the effectiveness and compatibility of our method.
> **Q1:** why only combined with ASH, not the others?
Thank you for your question. In Table 1 in the original submission, we only reported the results of CoVer combined with ASH **because it best demonstrates the excellence of CoVer**. In **Table 3 in the original submission**, we also show the results of CoVer combined with DICE and ReAct, and CoVer can also provide performance gains for them. Here in **Table 6 in attached PDF, we further report the comparison of CoVer combined with each mentioned DNN-based methods** (adding MSP, ODIN, and Energy score), which strongly demonstrates its superiority.
> **Q3:** About validation set
Thank you for your insightful questions. We utilized **the SVHN dataset as the validation set** to determine the most effective corruption types for each method in all experiments. Specific examples of selections are provided in **Table 7 in attached PDF. We will clearly explain the usage of the validation set in our main paper**. We would like to explain that our submitted code is developed from the MCM code repository, and methods such as MCM and NegLabel did not explicitly employ a validation set in their implementation. Furthermore, the main function realization of the submitted code focuses on the critical implementation steps of CoVer for the usage, omitting the validation set part. **We will definitely add relevant code in the revised version and clearly explain it**.
Pdf: /pdf/44ad1f97defd8686db7ecca142eca6d9585f0d7f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Era3D: High-Resolution Multiview Diffusion using Efficient Row-wise Attention | Accept (poster) | Summary: This paper develops a new image-to-multiview diffusion model with two key highlights. First, it proposes a novel method for estimating the focal length and elevation. Second, it introduces a new cross-view attention mechanism. Experiments demonstrate that the method outperforms previous SOTAs.
Strengths: Previous image-to-3D methods struggle when the object in the image is distorted due to a small focal length. Therefore, this paper focuses on a practical problem. Given the development of previous image-to-multiview methods, it is common to apply a cross-view attention mechanism to address the multiview consistency problem. This paper reconsiders the shortcomings of previous cross-view attention methods and proposes a better solution.
Weaknesses: My main concern is with the presentation of experiments. As mentioned in L136, the motivation is to improve performance on real-world data, where small focal-length cases exist. Fig. 2 shows a good example. Before reading the experiments, I expected to see more results like Fig. 2. However, I think only the toaster example in Fig. 5 demonstrates the superiority of Era3D in this regard. The main paper should present more results.
By comparison, the results in Fig. 6 seem more focused on demonstrating the generalization ability of Era3D and how other methods suffer from out-of-distribution issues. Although I understand that the robust generalization ability of Era3D might be due to EFReg, given that other methods are also trained on Objaverse, I think this presentation is not explicit. In particular, the focal length of the images output by SDXL cannot be controlled, so it is unclear whether Era3D's better performance is due to EFReg or better preprocessing of the training data.
Moreover, I disagree with the claim in L97 that Era3D is the first method to solve distortion artifacts. For example, "Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion" (CVPR23) also explores this issue by training directly on Pscal3D+, a real-world dataset with varying focal lengths. VQA-Diff (ECCV24) explores a similar problem and proposes to utilize prior knowledge of LLMs. Despite this, I find the proposed EFReg to be an interesting and novel solution.
Technical Quality: 3
Clarity: 2
Questions for Authors: (1) I think Wonder3D is a highly related paper, as both deal with the image-to-multiview problem. Given that the authors claim one of Era3D's superiorities is its high resolution, I'm wondering why they did not use PSNR and SSIM to evaluate the synthetic novel views as Wonder3D did.
(2) As explained by the authors in L207, the RMA is based on a simplification. Table 3 shows the benefits of RMA. However, my intuition is that RMA sacrifices quality to some degree compared to Dense and Epipolar. I did not find an ablation study regarding this in the main paper or supplementary material. I think the authors should demonstrate that the quality of RMA, Dense/Epipolar are comparable. For example, if the quality deterioration of RMA exists, it should be shown to be acceptable. Otherwise, the authors should demonstrate that RMA outperforms Dense and Epipolar in terms of quality and explain the reason.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: This paper addresses the issue of varying focal lengths in real-world data, which is a good motivation in my view. However, I do not think Era3D fully closed this problem, given that only a limited number of focal lengths are considered when preparing the training data. The authors should include this as a limitation or discuss it as a possible future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable time and insightful comments! We have tried to address your concerns in the updated manuscript and our rebuttal text:
**Q1: More results of perspective input.**
We appreciate this suggestion and have accordingly expanded our results section. In Figure 2 of the global response, we include more results on GSO datasets and cases from the Internet. Compared with Wonder3D and more recent Unique3D, our method demonstrates superior performance in mitigating distortion and reconstructing plausible 3D shapes. As per your recommendation, we will incorporate more results into the main paper.
**Q2: The presentation about EFReg is explicit. Is the better performance from EFReg or preprocessing of training data?**
In Fig. 7 of the main paper, we evaluate the effectiveness of EFReg via a qualitative comparison in which EFReg is removed. Notably, without EFReg, the resulting shape is distorted and fails to generate reasonable novel views in the canonical camera setting. To provide quantitative support for these observations, we additionally report the Chamfer Distance in Tab. 1. We use orthogonal renderings at an elevation of 0° as the reference (last column), vary the elevation from −10° to 40°, and select focal lengths from {35, 50, 85, 105, 135, ∞} to assess the system's robustness to distortions. These results consistently confirm that EFReg significantly contributes to robust pose estimation and enhances overall reconstruction accuracy.
**Table 1**: Ablation of EFReg on GSO datasets with various elevation (α) and focal lengths (f). We report the Chamfer Distance (↓).
| Pose | α=0, f=35 | α=0, f=50 | α=0, f=85 | α=0, f=105 | α=0, f=135 | f=∞, α=-10 | f=∞, α=10 | f=∞, α=20 | f=∞, α=30 | f=∞, α=40 | α=0, f=∞ |
|------|------|------|------|------|------|------|------|------|------|------|------|
| w/o EFReg | 0.0237 | 0.0233 | 0.020 | 0.022 | 0.0217 | 0.0221 | 0.0217 | 0.0225 | 0.0231 | 0.0228 | 0.0217 |
| w EFReg | 0.0223 | 0.0219 | 0.0216 | 0.0214 | 0.0214 | 0.0217 | 0.0216 | 0.0216 | 0.0219 | 0.0217 | 0.0213 |
**Q3: Missing metrics of PSNR and SSIM.**
We apologize for the missing reports of PSNR and SSIM. Following your valuable suggestion, we have now incorporated these results in Tab.2 to provide a more comprehensive evaluation following your suggestion. Our method significantly outperforms others, consistent with our performance on other geometry metrics.
**Table 2:** Quantitative evaluation of SSIM and PSNR.
| Method | RealFusion | Zero-1-to-3 | SyncDreamer | Wonder3D | Ours |
|--------|------------|-------------|-------------|----------|------|
| SSIM(↑) | 0.722 | 0.779 | 0.798 | 0.811 | **0.837** |
| PSNR(↑) | 15.26 | 18.93 | 20.05 | 20.83 | **22.74** |
**Q4: Inaccurate claim of 'the first method to solve perspective distortion'.**
We acknowledge that our initial statement requires refinement. The study in CVPR'23 attempts to predict the distortion in category-specific image reconstruction, and VQA-Diff only explores vehicle distortion in autonomous driving. In contrast to them, our study is the first to consider distortion artifacts in general 3D object generation. We will revise the statement in the manuscript.
**Q5: Does RMA lead to performance degradation compared with dense/epipolar MV attention?**
Theoretically, row-wise, epipolar, and dense MV attentions are equivalent in our orthogonal setup. However, both dense MV attention and epipolar attention consume a large amount of GPU memory during training, which prevents scaling up to a high resolution of 512$\times$512 and largely reduces the training batch size. As observed by Zero123 and other works, a large training batch size is very important for training a high-quality diffusion model. Due to the memory limitation, we cannot conduct a fair comparison experiment training the model with dense MV attention or epipolar attention.
**Q6: Era3D does not fully address the distortion issue. It should be included as a limitation.**
Thanks for your kind reminder. We use the commonly used focal lengths for training Era3D, which significantly mitigates perspective distortions. We acknowledge that our work offers a promising avenue for addressing this challenge rather than fully resolving it. We will clarify this point and discuss it in the limitation section.
---
Rebuttal Comment 1.1:
Title: response to rebuttal
Comment: The authors addressed most of my concerns. However, I believe there is still a limitation/weakness remaining.
[Strength and Suggestions]
Fig. 2 looks good, and I think it provides a more meaningful comparison than the current Figs. 5 and 6 in the main paper. I highly recommend moving Fig. 6 to the supplementary material, as it does not seem directly related to the main topic of this paper. If this paper were about a robust and powerful zero-shot generative model, then showing out-of-distribution cases (Fig. 6) of other methods would be reasonable. However, please note that the focus of this paper is on addressing the challenging short-focal length cases.
Q2 and Q4 refer to a similar issue. I noticed that you did not include another important baseline in Table 1 of the rebuttal: w/o EFReg and w/o various focal-length data in training. The reason I mentioned NFI and VQA-Diff is that I believe these studies have shown that adding various focal-length data during training can improve performance when dealing with short-focal-length scenarios. As they are prior works, I think it is necessary to demonstrate that Era3D advances one step beyond them. By doing so, we can see the necessity of EFReg more explicitly. The authors should add this row to Table 1 and include the discussion in the main paper.
[Weakness]
I still have an issue with the response to Q5. Consider the classic MobileNet. When it was first proposed, do you think readers would have accepted it if the authors had only demonstrated that MobileNet requires fewer FLOPs and has a shorter inference time? Demonstrating performance is always important because there is usually a trade-off between performance and efficiency (though it is definitely desirable to improve both simultaneously).
I understand that the resolution issue prevents the authors from comparing the methods using the current setup. However, I believe that for this type of comparison, it is acceptable to adjust the setup as long as the comparison remains fair. Specifically, the authors could train the models with a smaller resolution (perhaps on a smaller dataset as well, if it doesn’t compromise the results). In summary, fairness is the only consideration in this test, and for a top-tier conference, a thorough experiment is necessary.
I will increase my rating if the authors can address the weakness mentioned in this comment. For now, I will keep it at 6.
---
Reply to Comment 1.1.1:
Comment: Thanks for your valuable comments and suggestions.
Regarding the suggestion of 'w/o EFReg and w/o various focal length data in training', we believe Wonder3D provides a good baseline in the main paper. They neither consider perspective distortion nor employ any specific dataset strategy.
Following your comment on Q5, we recognize the importance of conducting performance comparisons between dense, epipolar, and our row-wise attention. Considering that in our orthogonal setup with the same elevation, epipolar attention is equivalent to row-wise attention (except for implementation differences), we compared dense attention and row-wise attention at a resolution of 256. We used the full 80,000 objects mentioned in the main paper for training. The training was conducted using 8 NVIDIA H800 GPUs, with a batch size of 128 for a total of 30,000 iterations. Each experiment took approximately 22 hours.
The results reported in Table 3 demonstrate that row-wise attention can achieve comparable performance to dense attention. For the Chamfer distance, LPIPS, and PSNR metrics, our row-wise setting even outperforms the baseline. We attribute this to row-wise attention reducing the number of attention tokens, allowing the model to focus more on valuable tokens.
**Table 3**: Performance of dense and row-wise attention at a resolution of 256.
| Method | CD ↓ | IoU ↑ | LPIPS ↓ | PSNR ↑ | SSIM ↑ |
|----------|--------|--------|-------|-------|-------|
| dense | 0.0239 | **0.5877** | 0.140 | 20.73 | **0.819** |
| rowwise | **0.0232** | 0.5831 | **0.137** | **20.92** | 0.813 |
We hope these findings address your concerns. If you have any further thoughts, we welcome active discussion and are committed to refining our paper accordingly.
Thanks for your time and positive reviews of our work. | Summary: The authors propose a method that can estimate the camera intrinsic matrix of a rendered image of a given object, which attempts to solve the problem that other image-to-3D methods are trained only on a fixed camera intrinsic matrix.
Strengths: The proposed method is the first work that considers changes of the camera intrinsic matrices for image-to-3D generation, and it proposes an attention strategy to reduce the computational cost.
Weaknesses: 1. I do not think row-wise attention should be a contribution as it is a special case of the epipolar attention from another paper.
2. Lacks the comparison with SV3D, which was released before the submission deadline and can generate high-resolution videos. Although Unique3D was released after the deadline, it would be better if the authors could also optionally include a comparison with it.
3. There is only a demo video in the supplementary material. If the authors want to refer reviewers to the arXiv version, that actually violates the double-blind policy.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Authors mentioned that they used the feature maps of middle-level transformer block for intrinsic estimation. Can you specify it? Which level did you use? How many levels did you use? Any experiments about the impact of different levels?
2. Is it possible to directly generate images under perspective cameras?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. I am not sure the novelty of the paper is enough for publication. The only contribution that I can tell is the camera intrinsics estimation. Row-wise attention is borrowed from other works. The authors mentioned that the proposed work can generate high-resolution images, but SV3D can also do this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable time and insightful comments! We have tried to address your concerns in the updated manuscript and our rebuttal text:
**Q1: Row-wise attention is just epipolar attention and is not the contribution of this paper.**
We respectfully disagree with this characterization. As discussed in Line 290 of the submission, the vanilla epipolar attention adopted by previous methods significantly increases memory and time consumption due to the need for sampling points along the epipolar lines, which makes it even slower than dense multiview attention. Era3D is the first attempt to simplify epipolar attention, making it extremely efficient for generating high-quality 3D objects. Thus, we consider it a core contribution of our work.
**Q2: Lack of comparison with SV3D and Unique3D.**
While we acknowledge the importance of providing comprehensive comparisons, Unique3D and SV3D are unpublished technical reports on arXiv. Both are concurrent works that released their code near or after our submission. Despite this, we provide an additional comparison with Unique3D in Fig.2 of the global response, which shows that Unique3D still suffers from perspective distortion while our method greatly alleviates this problem. We appreciate the reviewer's feedback regarding these comparisons.
**Q3: There are no other supplementary materials other than the video and response to the reviewer's comment regarding the violation of the double-blind policy.**
We respectfully disagree with the assessment. We strictly follow the NeurIPS 2024 guideline to attach the supplementary materials at the end of the main paper, including additional experiments and descriptions of our method. We also included an anonymous link at the end of the abstract to show our results without compromising the double-blind review process. We did not encourage or invite reviewers to search for our submission, and thus, we maintain that we have adhered to the double-blind reviewing policy as stipulated by the conference guidelines. We appreciate the reviewer's feedback and clarification on this matter.
**Q4: Selection of features for EFReg.**
We apologize for any lack of clarity in our presentation. For EFReg prediction, we utilize the final layer feature map of the intermediate UNet block with the lowest resolution since it could provide high-level global information.
**Q5: Can the proposed method directly generate perspective images?**
Our method is designed for 3D generation, which generates images around an object with orthogonal cameras. Thus, generating perspective images falls outside the scope of our paper. Moreover, generating perspective images will introduce additional challenges, such as varying scale prediction across multiple views, which would significantly increase the complexity of 3D generation. That's why existing works, including ours, focus on orthogonal setups rather than perspective ones.
**Q6: Not sure the novelty of the paper is enough for publication.**
As elaborated in Q1, our row-wise attention is distinct from the custom epipolar attention in previous studies. The proposed row-wise attention demonstrates a significant enhancement in training efficiency. SV3D is our concurrent work and requires considerably more resources for training. Therefore, we restate our contributions as follows:
1. **Era3D** is the first method that tries to solve the distortion artifacts in general 3D generation tasks;
2. We design the novel **EFReg** to enable diffusion models to take distorted images as inputs while outputting the orthogonal images on the canonical camera setting;
3. We propose row-wise multiview attention, an efficient attention layer for high-resolution multiview image generation;
We believe our designs could advance the field and inspire other 3D generation models.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I appreciate the feedback from the authors. I still have some concerns as follows:
1. The authors mentioned that "We leave the proof in the supplementary material.", but there is no such thing.
2. How come the final layer of the UNet has the lowest resolution, shouldn't it be the highest resolution?
3. Can you please list the related methods that focus on orthogonal setups?
---
Rebuttal 2:
Comment: Thank you for your valuable feedback! We try to address your issues in the following discussion.
**Q1: Proof of Row-wise.**
Please refer to the 'A.2 Proof of Proposition 1' section (on Page 18) in our initial submission for the detailed proof.
**Q2: Selection of features for EFReg.**
As mentioned in response to Q4, we select 'the final layer feature map of **the intermediate UNet block**' rather than 'the final layer feature map of the whole UNet'. Therefore, the feature has the lowest resolution.
**Q3: Works on orthogonal setups.**
The works on orthogonal setups include but are not limited to:
+ (**ICLR'2024**) MVDream: Multi-view Diffusion for 3D Generation.
+ (**arXiv'2024**) Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image.
+ (**ECCV'2024**) CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model.
+ (**CVPR'2024**) EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Priors.
+ (**CVPR'2024**) Wonder3D: Single Image to 3D Using Cross-Domain Diffusion.
We can further address unclear explanations and remaining concerns if any.
Once more, we appreciate the time and effort you've dedicated to our paper.
---
Rebuttal Comment 2.1:
Comment: I have adjusted my rating after further consideration.
---
Reply to Comment 2.1.1:
Comment: Thanks for your great efforts in reviewing our paper!! | Summary: This work introduces a novel take on multiview diffusion models, highlighting the potential to generate high-resolution multiview images from a single image. The method comes with a new design for the diffusion-based camera prediction module, predicting the focal length and elevation of the input image, together with row-wise attention enforcing epipolar priors in the MV diffusion. The results show that the approach achieves very high quality, detailed 3D meshing ability, and multiview image generation at larger resolution while consuming much less computation compared to other presented approaches.
Strengths: **Clear Motivation and Innovative Module Design**
In this paper, the authors address several significant challenges associated with MV Diffusion in 3D content generation. These challenges include issues such as low resolution, inefficient generation processes, and inconsistent camera settings. For each of these problems, the authors propose novel designs aimed at providing effective solutions.
**State-of-the-Art Results**
The authors claim to have achieved state-of-the-art performance in single-view image generation tasks, as evidenced by the results in Table 1 and Table 2. However, due to the rapid advancements in 3D generation, the work does not compare its generation quality with recent methods such as LRM.
**Novel Contribution to 3D MV Diffusion Generation**
To the best of my knowledge, this paper is the first to address and propose solutions for the distortion problem specifically in 3D MV Diffusion generation.
Weaknesses: ### Missing Critical Details
The paper lacks several essential points, including optimization time for inference, details on hyperparameters, and robustness testing.
### Insufficient Ablation Study
The ablation study does not adequately justify the architectural design choices. More comprehensive experiments are needed to support these decisions.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The abstract (line 16) states, "Era3D generates high-quality... up to a 512×512 resolution while reducing computation complexity by **12x times**." I was really confused by this claim. In the paper, the authors claim the design of row-wise multiview attention results in a 12-fold reduction in running time compared with dense MV attention. However, the authors didn't show the time cost of the other parts of their pipeline. Further clarification is needed.
2. Why did you choose viewpoints on an elevation of 0? Will it affect the generation for a specific side (e.g., bottom side)?
3. Have the authors tried to perform any sparse-view reconstruction? And why did you choose 6 views: {β, β + 45◦, β + 90◦, β − 45◦, β − 90◦, β + 180◦}? More ablation studies or claims may be needed.
4. I checked your demo video, but it only shows your result. Could you also compare it with other results in the video for better display?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Data quality and representativeness']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable time and insightful comments! We have tried to address your concerns in the updated manuscript and our rebuttal text:
**Q1: Comparison with LRM-based methods.**
Tab.1 showcases the comparisons of Era3D with OpenLRM and CRM on the GSO dataset, which shows that Era3D exhibits better performance than these two methods. Our method can also be combined with LRM-based methods, because recent LRM-based methods rely on multiview generation as the input, while Era3D can provide the generated multiview images.
**Table 1**: Quantitative comparison with LRM-based methods:
| Method | CD↓ | IoU↑ | LPIPS↓ | SSIM↑ | PSNR↑ |
|--------|-----|------|--------|-------|-------|
| openLRM | 0.0302 | 0.5243 | 0.158 | 0.738 | 19.03 |
| CRM | 0.0237 | 0.5693 | 0.146 | 0.803 | 20.78 |
| Ours | 0.0217 | 0.5973 | 0.126 | 0.873 | 22.74 |
**Q2: Missing details of inference time and hyperparameters.**
The whole process requires approximately 4 minutes, comprising 13 seconds for the multiview diffusion, 3 minutes for the NeuS reconstruction, and 10 seconds for the texture refinement. Our diffusion model employs a 6-layer MLP for pose estimation. We use the same hyperparameters as SD2.1, except where explicitly stated in the 'Implementation Details' section. For the NeuS reconstruction, we use the same settings as Wonder3D. During texture refinement, we optimize the appearance for 200 iterations with a resolution of 1024 and a learning rate of 1e-3. We will add these details to the revision.
**Q3: Absence of robustness testing.**
We comprehensively evaluate the robustness of our pose prediction on the GSO dataset and the generation quality for in-the-wild objects in Tab.5, Tab.6, and Fig.11 of the Supplementary Material. We will also release our code for public testing and evaluation.
**Q4: Why select generation viewpoints of elevation 0? Would this affect the reconstruction of the bottom?**
In contrast to the setup with various elevations employed in Zero123++, Era3D is trained to generate six views at an elevation of 0. This choice is because row-wise attention requires a fixed elevation. Intuitively, models perceive an elevation of 0 more readily than a random elevation. While this setting may affect the objects with planar bottoms, such cases are uncommon. Note that the prior works such as SyncDreamer and Wonder3D also use similar pre-defined viewpoints.
**Q5: Why only generate 6 views?**
Generating dense views necessitates substantial memory for training. Fig.1 in global response illustrates sparse-view reconstruction results using 2, 4, and 6 views. Our method, utilizing 6 views, consistently produces complete and plausible 3D shapes.
**Q6: Insufficient ablation of EFReg.**
We utilize the intermediate feature map with the lowest resolution in UNet for EFReg prediction since it could provide high-level global information. The effectiveness of EFReg is evaluated qualitatively in Fig.7 of the main paper. To provide quantitative support, we additionally report the Chamfer Distance in Tab.2. These results confirm that EFReg facilitates robust pose estimation and enhances reconstruction accuracy.
**Table 2**: Ablation of EFReg on GSO datasets with various elevation (α) and focal lengths (f). We report the Chamfer Distance (↓).
| Pose | f=35, α=0 | f=50, α=0 | f=85, α=0 | f=105, α=0 | f=135, α=0 | α=-10, f=∞ | α=10, f=∞ | α=20, f=∞ | α=30, f=∞ | α=40, f=∞ | α=0, f=∞ |
|------|-----------|-----------|-----------|------------|------------|------------|-----------|-----------|-----------|-----------|----------|
| w/o EFReg | 0.0237 | 0.0233 | 0.020 | 0.022 | 0.0217 | 0.0221 | 0.0217 | 0.0225 | 0.0231 | 0.0228 | 0.0217 |
| w EFReg | 0.0223 | 0.0219 | 0.0216 | 0.0214 | 0.0214 | 0.0217 | 0.0216 | 0.0216 | 0.0219 | 0.0217 | 0.0213 |
**Q7: Provide additional visual comparison with baselines in the demo video.**
We appreciate the reviewer's suggestion. We will incorporate comparisons in the demo video to provide a more comprehensive visual presentation of our method's performance relative to existing approaches.
**Q8: Confusing claim about computation complexity reduction of row-wise attention.**
Compared to Dense MV attention, our row-wise attention reduces the computation complexity by 12 times. We will clarify this in the Abstract. In Tab.3, we list the memory usage and running time of each part of our pipeline, in which other parts include self attn, cross attn, and feed-forward layers. Notably, Dense MV attention layers constitute approximately 60% of the memory footprint and 75% of the running time in the overall pipeline. Our row-wise attention substantially mitigates these computational demands, with particularly remarkable improvements in execution time.
**Table 3**: Memory usage and running time of the pipeline with 512 resolution and xFormer.
| Method | MV Attn Mem (G) | Other parts Mem (G) | Total Mem (G) | MV Attn Time (ms) | Other parts Time (ms) | Total Time (ms) |
|--------|-----------------|---------------------|---------------|-------------------|-----------------------|-----------------|
| Dense | 1.42 | ~1.0 | 2.40 | 22.96 | ~6.5 | 29.13 |
| Epipolar | 1.71 | ~1.0 | 2.81 | 20.03 | ~6.5 | 26.75 |
| Ours | 1.08 | ~1.0 | 2.09 | 1.86 | ~6.5 | 8.31 |
---
Rebuttal 2:
Title: Response to the rebuttal
Comment: I greatly appreciate the author’s detailed rebuttal, which effectively addressed nearly all of my concerns. However, based on my understanding of the field and feedback from another reviewer, I have some reservations about the paper’s novelty. Given the substantial body of existing literature in this area, I remain cautious about the contribution of this paper. Therefore, I will maintain my current score of 6.
---
Rebuttal Comment 2.1:
Comment: We sincerely appreciate your great efforts in reviewing this paper. Your constructive advice and valuable comments really help improve our paper. We will add corresponding discussions in the revision.
Once more, we appreciate the time and effort you've dedicated to our paper. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable comments. In summary, the reviewers are positive about the novelty, performance, and potential of our method: **"address several significant challenges associated with MV Diffusion"**(R-5Kpg), **"the first work that considers the change of the camera intrinsic matrixes for the image-to-3D generation"**(R-a9Xx) and **"be an interesting and novel solution"**(R-aLmz).
We include the necessary figures and tables in the attached PDF file. We respectfully direct the reviewers to the corresponding sections for our detailed responses.
Pdf: /pdf/82067e8bb8b6cf771b3da7d60832516bd593b9a1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
When Your AIs Deceive You: Challenges of Partial Observability in Reinforcement Learning from Human Feedback | Accept (poster) | Summary: The authors present a current problem (research gap): if humans give feedback on model answers in a partially observable environment, such feedback may lead to deceptive inflation and overjustification. For example, if humans depend purely on reported error results, this will encourage the model to intentionally hide errors, leading to deceptive behavior.
The authors then introduce a formalization of partially observable RLHF and formalize Deceptive Inflation and Overjustification, for example, turning human beliefs into a matrix B and replacing Return(states) with B$\cdot$Return(observations). After that, the authors define the concept of ambiguity, a linear subspace: if the difference between the human's return function and the true return function falls into this subspace, the feedback will be the same.
Strengths: [1] The proposed research question is practical, interesting, and at an early stage of investigation. The authors formalize this question for human feedback under partial observability.
[2] The authors provide several informative figures, which should be encouraged.
Weaknesses: [1] No experimental results and metrics
One main contribution of this paper is the proposed research question and its formalization. Given the formalization, the reader would expect the authors to (1) explain the question within the formalization, (2) give an experiment to empirically reveal the question or justify the proposed solution, and (3) give metrics to judge the effectiveness of methods that aim to solve this question. And the paper only did (1).
Though this paper gives proofs of specific propositions, it is not reasonable for a formalization to lack (2) or (3).
[2] Writing
Despite the clear figures, some of the logic does not connect. The purposes stated ahead of sub-sections are lost, for example, due to the lack of translation from the proven propositions to answers to the proposed questions.
Technical Quality: 2
Clarity: 2
Questions for Authors: The paper presents a formalization of Deceptive Inflation and Overjustification and mentions some modifications to current RLHF methods. Even with proofs, the results may not stand without experiments. Thus this paper would grow much stronger if it could be supported by experiments.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors admitted that some of the assumptions this method is based on may not hold and thus need further improvement.
Beyond that, authors may consider creating some metrics for deceptive inflation and overjustification, and then justifying them. In this way, this work could serve as a stepping stone for the community to follow.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review of our work!
> Given the formalization, the reader would expect the authors to (1)explain the question within the formalization, (2)give an experiment to empirically reveal the question or justify the proposed solution, and (3)give metrics to judge the effectiveness of methods that aim to solve this question. And the paper only did (1). Though this paper gives proof of specific propositions, it's not reasonable for a formalization lacking (2) or (3).
Our work is indeed theoretical. For our general philosophy justifying this choice, we refer to [our global rebuttal](https://openreview.net/forum?id=XcbgkjWSJ7&noteId=dwv3iH9Ccw).
With respect to empirical justifications of proposed solutions, we want to additionally highlight that in Section 6, we sketch the beginnings of a method that is *theoretically justified*. In particular, if $B$ is known, then in Proposition C.5 we prove that there is a differentiable loss function whose minimum contains only feedback-compatible reward functions, which are safe if the ambiguity $\ker(B \circ \Gamma)$ disappears.
> Despite the clear figures, some logic does not match. The purposes stated ahead of sub-sections are lost, for example, due to the lack of translation from proven Propositions to the answers to proposed questions.
Thank you for this comment. We will extend the first paragraph in Section 4 to foreshadow Proposition 4.1 in Section 4.1, which, so far, was unmentioned in the intro of Section 4. Our new intro paragraph to Sec. 4 will read:
“*We now analyze failure modes of a naive application of RLHF from partial observations, both theoretically and with examples. In Proposition 4.1, we show that under partial observations, RLHF incentivizes policies that maximize what we call $J_\text{obs}$, a policy evaluation function that evaluates how good the state sequences “look to the human”. The resulting policies can show two distinct failure modes that we formally define and call deceptive inflation and overjustification. In Theorem 4.5 we prove that at least one of them is present for $J_\text{obs}$-maximizing policies. Later, in Sections 5 and 6, we will see that an adaptation of the usual RLHF process might sometimes be able to avoid these problems.*”
Furthermore, we think that the start of Section 5 might be confusing in its current form, where we ask “Assuming the human’s partial observability is known, could one do better?”. This question is only directly addressed in Section 6. To improve the logic, we will make the following changes:
We will merge Sections 5 and 6 (i.e., Section 6 becomes the new Section 5.3), and change the second paragraph of the intro to Section 5 as follows:
“*We start in Section 5.1 by analyzing how much information the feedback process provides about the return function when the human’s choice model under partial observations is known precisely. We show that the feedback determines the correct return function up to an additive constant and a linear subspace we call the ambiguity (see Theorem 5.2). If the human had a return function that differed from the true return function by an element in the ambiguity, they would give the exact same feedback — such return functions are thus feedback-compatible. In Section 5.2, we show an example where the ambiguity vanishes, and another where it doesn’t, leading to feedback-compatible return functions that have optimal policies with high regret under the true return function. Finally, in Section 5.3 we explore how one could in theory use Theorem 5.2 as a starting point to design reward learning techniques that work under partial observability.*”
Please let us know if you see further problems in the clarity of our writing, and we are happy to address them!
> The authors admitted that some of the assumptions that this method is based on may not hold, thus need further improvement.
It is true that some assumptions on the human model may not hold. Nevertheless, see [our answer to reviewer LhS3](https://openreview.net/forum?id=XcbgkjWSJ7&noteId=FWxGuq61x2): we think a more realistic feature-map formulation of the human model (replacing the belief $B$) would effortlessly connect with our formalization. Thus, we think there is great potential for our formalization to remain relevant even when taking into account more realistic human models. We will add a discussion on this viewpoint in the final version.
Additionally, it is a priori our belief that the results in Section 4 are robust to more realistic human models, since they are about failure modes that we show to be present even when the human has excellent rational expected-value thinking.
Please let us know about any remaining concerns, and we are happy to discuss them.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarification! The Global Rebuttal mitigates my major concern, and I would raise my score to a positive one. | Summary: This work studies the impact of partial observability in RLHF. Two failure cases are defined, deceptive inflation and overjustification, and concrete conditions under which they impact the learned policy are provided. Moreover, the ambiguity induced by the partial observability is further characterized: the feedback determines the correct return function up to an additive constant and a linear subspace, which does not vanish in many circumstances. Finally, the authors recommend performing exploratory research on RLHF for cases when partial observability is unavoidable.
Strengths: - This work targets at one important issue in the current RLHF learning pipeline that the human annotators may not observe the full information of the model's generation processes. Taking such scenarios in consideration is vital to keep the trained models trustworthy. I appreciate the authors' efforts in this direction.
- One theoretical formulation is proposed to model how humans deal with partial observability (i.e., through the belief matrix). Under this formulation, detailed discussions are provided on what kinds of conditions would lead to impacted policies and how much ambiguity is introduced. The cases of Deceptive Inflation and Overjustification are well aligned with many practical scenarios in my mind. And the discussed ambiguity is enlightening, also highlighting the need to treat partial observability carefully.
- The overall presentation is very clear, with key messages illustrated both rigorously through theorems and intuitively through examples/figures. I enjoyed reading this work.
Weaknesses: I am overall satisfied by this work. The followings are a few points that would be interesting if further discussed.
- This work discusses the partial observability mostly in the theoretical sense, with a few hypothetical examples. It would be nice to see or verify the impact of partial observability in real experiments. I understand this would require a large amount of further effort, but I encourage the authors to include some discussions (e.g., on experimental designs) to enlighten future works.
- As also mentioned in the limitations listed in Section 7, the formulation of the belief matrix is a bit strong in my mind. I fully understand that it is adopted to deliver the message of treating partial observability carefully, while also encouraging the authors to include some discussions to improve such a formulation.
Technical Quality: 3
Clarity: 3
Questions for Authors: I would love to hear the authors' opinions on the points listed in weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review!
> This work discusses the partial observability mostly in the theoretical sense, with a few hypothetical examples. It would be nice to see or verify the impact of partial observability in real experiments.
Our work is indeed theoretical. For our general philosophy, we refer to [our global rebuttal](https://openreview.net/forum?id=XcbgkjWSJ7&noteId=dwv3iH9Ccw).
## Feature count formulation of the human model
> As also mentioned in the limitations listed in Section 7, the formulation of the belief matrix is a bit strong in my mind.
This indeed appears to be a strong assumption. We think further work is needed to evaluate the strengths and shortcomings of this formalization. We want to explain one implicit viewpoint of ours, which we think suggests this formalization might reach quite far.
Namely, we think it is compatible with how humans intuitively form judgments over events.
Concretely, if you are a human tasked with judging the quality of an observation sequence $\vec{o}$, what you will do is admittedly **not** to compute an explicit posterior belief $B(\vec{s} \mid \vec{o})$ over $\vec{s}$ – indeed, if the environment is very complex, it may already be impossible to even think about entire state sequences $\vec{s}$. More likely, you would think about the presence of certain features in the state sequence; e.g., “how often was there a bug”, or “has the software been installed”, or “how efficient is the code”. One would then implicitly assign a reward to each such feature and then add up the rewards.
This feature viewpoint could a priori be compatible with our formulation, namely if the feature-based returns implicitly come from a belief over state sequences. This could work as follows: Assume there is a feature map $\phi(s)$ that maps high-dimensional states $s$ to low-dimensional feature vectors $f$. Assume the reward only depends on feature vectors: $R(s) = R'(\phi(s))$ for some function $R'$. We obtain:
$$\sum_{\vec{s}} B(\vec{s} \mid \vec{o}) G(\vec{s}) = \sum_{\vec{s}} B(\vec{s} \mid \vec{o}) \sum_{t = 0}^{T} \gamma^t R(s_t) = \sum_{f} \left( \sum_{\vec{s}} B(\vec{s} \mid \vec{o}) \sum_{t = 0}^{T} \gamma^t \delta_f(\phi(s_t)) \right) R'(f) \eqqcolon \sum_{f} N(f \mid \vec{o}) R'(f),$$
where the outer sum runs over possible features, and where $\delta_f(\phi(s_t))$ evaluates to $1$ if and only if $\phi(s_t) = f$. In the last step, we denoted the coefficient of $R'(f)$ by $N(f \mid \vec{o})$, which is the expected discounted feature-vector count of $f$ upon observing the whole sequence $\vec{o}$.
Then instead of modeling the human as coming with a reward function $R$ and a belief matrix $B$, we have found an alternative model: we can model the human as coming with a feature-based reward function $R'$ and an expected feature count $N(f \mid \vec{o})$. This highlights that the whole formalism could also be built upon a human model where the human “counts the presence of features”, as long as these feature counts come with consistency properties that make them compatible with a belief over state sequences. We think this viewpoint makes it natural to extend our work to more realistic human models that effortlessly connect with our formalization.
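To make this rewriting concrete, here is a small numerical sketch (a toy instance with made-up sizes and random values; `phi`, `R_feat`, and `B` are illustrative, not from the paper) checking that the belief-based and feature-count-based formulations agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: 4 states, 2 feature values, sequences of length 2, gamma = 0.9.
n_states, n_feats, gamma = 4, 2, 0.9
phi = rng.integers(0, n_feats, size=n_states)   # feature map phi(s)
R_feat = rng.normal(size=n_feats)               # feature-based reward R'(f)
R = R_feat[phi]                                 # R(s) = R'(phi(s))

# All state sequences of length 2, and an arbitrary belief B(s_vec | o) over them.
seqs = [(s0, s1) for s0 in range(n_states) for s1 in range(n_states)]
B = rng.random(len(seqs))
B /= B.sum()

# Left-hand side: expected return under the belief.
G = np.array([sum(gamma**t * R[s] for t, s in enumerate(seq)) for seq in seqs])
lhs = float(B @ G)

# Right-hand side: expected discounted feature counts N(f | o), then sum over f.
N = np.zeros(n_feats)
for b, seq in zip(B, seqs):
    for t, s in enumerate(seq):
        N[phi[s]] += b * gamma**t
rhs = float(N @ R_feat)

print(abs(lhs - rhs) < 1e-12)  # True: both formulations agree
```

The agreement holds by construction whenever the reward factors through the feature map, which is exactly the consistency property discussed above.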
We will add this additional discussion to our paper to explain the reach of our belief matrix formulation.
Please let us know if you have remaining concerns, and we are happy to address them.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! It helps me maintain a positive opinion on this work. | Summary: The paper discusses the challenges that arise when human annotations for RLHF are based on partial observations. They formally define deceptive inflation and overjustification as failure cases caused by partial observation, and theoretically prove that standard RLHF is guaranteed to result in either or both of them. They further analyze how much information the feedback process provides about the return function assuming that human’s partial observability is known and accounted for.
Strengths: - The research question proposed in this paper is of significant value. The authors provide a clear formal definition of deceptive inflation and overjustification by introducing policy evaluation function and over-/underestimation error.
- The paper provides detailed and solid proof over their claims, not only discussing the limitations of standard methods but also showing whether the model can perform better when the human’s partial observability is known. The authors also use many examples and counterexamples to analyze and explain the proposed claims.
Weaknesses: - The proofs are based on assumptions of a specific MDP structure and a particular human belief function, which might not be easily generalized to realistic, complex environments.
- No empirical evidence is included in this paper, though real-world examples are given.
(I am not quite familiar with relevant topics and am unable to assess this part.)
Technical Quality: 3
Clarity: 3
Questions for Authors: Refer to the above comments.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discusses some limitations and future work. In addition to the aspects mentioned by the authors, I am also looking forward to seeing empirical evidence regarding this topic.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
> The proofs are based on assumptions of a specific MDP structure and a particular human belief function, which might not be easily generalized to realistic, complex environments.
**It is not true that we assume a specific MDP structure.** Our work applies to any MDP, encompassing all reinforcement learning problems. **Our human belief function is also assumed to be general**: The only restriction is that it sums to 1 over all state-sequences, for a given observation sequence. Future work could think about relaxing the human belief function to allow for "feature maps" since humans cannot think about the whole environment state of complex environments. We detail this “feature map” idea [in our answer to reviewer LhS3](https://openreview.net/forum?id=XcbgkjWSJ7&noteId=FWxGuq61x2).
> No empirical evidence is included in this paper, though real-world examples are given.
Our work is indeed theoretical. For our general philosophy, we refer to [our global rebuttal](https://openreview.net/forum?id=XcbgkjWSJ7&noteId=dwv3iH9Ccw).
Please let us know if you have remaining concerns. We will be happy to address them!
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarification! I would maintain the current score. Thanks! | Summary: The paper addresses the problem of accounting for humans having only partial observability of their environment when providing feedback. They outline two natural issues that can arise from such partial observability - deceptive inflation and overjustification - and provide examples of both. They then explore what is possible if the human's partial observability is fully known and accounted for, specifically asking how much about the return function can be recovered from the feedback. They show that in many realistic cases, there is an irreducible ambiguity (formalized as a vector space) in determining the return function. Finally, they propose ideas for combating these issues with a small proof-of-concept theorem backing their ideas.
Strengths: 1. The paper tackles a new facet of partial observability in human feedback - humans not observing the entire state.
2. They formalize the intuitive issue of deceptive inflation, and also introduce the counterintuitive but very plausible idea of overjustification.
3. Their theoretical framework is very clean and allows them to completely characterize the ambiguity in obtaining the return function.
4. The study of examples in the appendix is very comprehensive.
Weaknesses: 1. While the problems outlined are illustrated using hypothetical examples, I think the paper would greatly benefit from carefully designed experiments that unambiguously demonstrate the existence of such problems in practice.
2. The paper does not propose any concrete solutions to the problem. While this is great work exploring the problem, I feel that it does not rise to the level of a NeurIPS paper. This would make an excellent workshop paper that can grow into a full conference paper after adding some concrete attempts at solving this problem, either by presenting algorithms with new theoretical guarantees in this context, practical experiments showing a quantifiable improvement in meaningful metrics associated with the problem, or both.
3. Minor and easy to rectify: The paper does not discuss other work in RLHF that deals with partial observability and heterogeneity (albeit in a subtly different context), for example [1, 2, 3].
All in all, this paper is a great start, but needs more work to become a full-fledged conference paper. I am willing to reconsider my score if either 1 or 2 are provided.
Refs:
1. Direct Preference Optimization With Unobserved Preference Heterogeneity. Chidambaram et al, 2024.
2. A Theoretical Framework for Partially Observed Reward-States in RLHF. Kausik et al, 2024.
3. RLHF from Heterogeneous Feedback via Personalization and Preference Aggregation. Park et al, 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Are there easy ways to force the ML model to "expose the underlying state a bit more" from time to time? A naive example is forcing verbosity in the case of the deceptive inflation example, but of course one needs to design a more general and sustainable fix.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review!
>The paper would greatly benefit from carefully designed experiments that unambiguously demonstrate the existence of such problems in practice.
Our work is theoretical and we advocate for judging it on a theoretical standard. We explain our viewpoint in more detail in our [global rebuttal](https://openreview.net/forum?id=XcbgkjWSJ7&noteId=dwv3iH9Ccw).
> The paper does not propose any concrete solutions to the problem [...] adding some concrete attempts at solving this problem, either by presenting algorithms with **new theoretical guarantees** [emphasis ours] in this context, practical experiments showing a quantifiable improvement in meaningful metrics associated with the problem, or both.
We want to highlight that Section 6 presents exploratory ideas for practical solutions, and in particular, that the discussed Appendix C.3 with Proposition C.5 provides **a first theoretical guarantee**: If the belief matrix B is known explicitly, one can design a differentiable loss function whose minima are feedback-compatible. In particular, this proves that if the ambiguity $\text{ker}(\text{B} \circ \Gamma)$ vanishes, then the minima of the loss function are guaranteed to be safe, in the sense that they have return functions that differ up to an additive constant from the true return function. Thus, future work has a theoretical guide for designing practical algorithms, which requires carefully designing the type of partial observability so that the ambiguity is “benign”, and potentially specifying an approximation of the belief matrix B.
> The paper does not discuss other work in RLHF that deals with partial observability and heterogeneity (albeit in a subtly different context), for example [1, 2, 3].
Thank you for making us aware of these recent works. We will add the following paragraph to the related work:
“*The literature also discusses other cases of partial observability. [1] and [3] deal with the situation that different human evaluators can have different unobserved preference types. In contrast, we assume a single human evaluator with a fixed reward function, which can be motivated by cases where the human choices are guided by a behavior policy, similar to a [constitution](https://www.anthropic.com/news/claudes-constitution) or a [model spec](https://cdn.openai.com/spec/model-spec-2024-05-08.html). [2] assumes that the choices of the human evaluator depend on an unobserved reward-state with its own transition dynamics, similar to an emotional state in a real human. In contrast, we assume the human to be stateless.*”
We also add paragraphs detailing related work on the modeling choices for human preference selection, truthful AI, and other works that deal with missing information. Please let us know if you would like to see these paragraphs, then we will promptly add them in a subsequent answer.
> Are there easy ways to force the ML model to "expose the underlying state a bit more" from time to time? A naive example is forcing verbosity in the case of the deceptive inflation example, but of course one needs to design a more general and sustainable fix.
We think about this in terms of trade-offs. *In principle*, the environment state could be fully observed by the human: We could just send all the information that the AI receives also to the human evaluators. The reason why we still think our work is necessary is that we expect it to increasingly become prohibitively expensive to show human evaluators all content; after all, human evaluators have limited time, and so if they always fully observe a state sequence, that trades off against the number of labels they can provide within a given timeframe.
Thus, the question becomes how to *design* the partial observability in exactly such a way that despite limited information, the feedback-process still leads to correct reward functions being learned. Your idea to “Expose the state from time to time” (e.g. randomly) is interesting, and we are indeed interested in further research that deeply analyzes such settings.
Please let us know if you have remaining concerns. We will be happy to discuss them.
---
Rebuttal 2:
Title: Response to rebuttal
Comment: While I appreciate the response and acknowledge that I am no stranger to an emphasis on theory, I continue to have qualms.
I believe that there is a difference between a theoretical work that introduces a new model and theoretical work that identifies a problem with existing work. Since your work falls into the latter, it needs more than just hypothetical justification to demonstrate the severity and impact of the problem. The experiments demonstrating the existence of this problem need not be your own, but they need to exist and be referred to.
An example I would like to give is the work of Alon and Yahav introducing oversquashing in GNNs. Their paper lacked substantive theory, but it empirically demonstrated the existence of the problem they were postulating. I think if we identify a potential problem in our minds, it is our duty to make sure that we haven’t made it up.
Further, I apologize for not acknowledging the theoretical result they had. The reason it didn’t satisfy me is that assuming the belief matrix is known is too strong an assumption. This made the result too weak for me to appreciate, despite it seeming technically non-trivial to show.
If either of these two concerns were addressed (strength of theoretical result or experiments demonstrating problem), then the other issue would be less major to me. Since neither has been addressed, I am afraid I cannot raise my score.
I must emphasize that this is an important problem and a paper with potential, it just needs to be strengthened on at least one of these ends. While I know that it can be tempting to work on exciting theory, in this case I implore you to design an experiment that can demonstrate the potential existence of this problem. Work like that of Alon and Yahav is in the unrelated domain of GNNs, but nevertheless it should help you understand the broad flavor of experiments that could help you convince readers.
---
Rebuttal Comment 2.1:
Title: A reference for empirical evidence of deceptive inflation
Comment: Thank you for continuing the discussion.
> The experiments demonstrating the existence of this problem need not be your own, but they need to exist and be referred to. [...] I think if we identify a potential problem in our minds, it is our duty to make sure that we haven’t made it up.
We think you raise valid points on the need for empirical validation of theoretical concerns. One early example for deceptive inflation is the robot hand from the blogpost accompanying an early RLHF paper [1]. Unfortunately, there are not many details on this example. Another very recent paper [2] provides more detailed evidence for deceptive inflation.
In detail: In [2], Section 3.3, there is a setup for a quintessential task involving deceptive inflation: A human user asks the model a question, and if it answers truthfully, it will get some low reward since the answer is undesirable to the user. If it simply answers with the desirable answer, it will also get a low reward. But if it answers in a desirable way and modifies a file in an unobserved way to make that answer appear to be true, then it gets a high reward. Figure 2 in [2] shows that this behavior can show up in a zero-shot way, and without being prompted to do so, after being trained on other tasks. Figure 13 shows that this behavior is successfully reinforced into the model with outcome-based PPO (with rewards based on observations, not full states) even when there is additional “helpful, honest, harmless” training happening (which only slows down the manifestation of deceptively inflating behavior). Finally, Figure 2 shows that there is also a more serious deceptively inflating behavior that can appear zero-shot (namely reward-tampering), albeit at a much lower rate, and as explained in Section 3.4, the authors do not attempt to demonstrate that this behavior can be strengthened via RL.
To be clear, since Section 3.3 in [2] does not work with trajectory comparisons but directly looks at rewarding some unobserved behavior that produces favorable observations, it is mainly evidence that “maximizing $G_\text{obs}$” (in our language) can strengthen deceptive behavior (analogous to our Theorem 4.5). What’s still missing is evidence that such behavior also gets reinforced when there is an earlier reward modeling phase using trajectory comparisons. In other words, this paper does not provide direct empirical evidence for our Proposition 4.1.
Overall, we think it is laudable that the authors of [2] could show the zero-shot emergence and reinforcement of deceptive behavior under partial observability, and we think that was likely a significant challenge. For example, the authors write:
> *Models’ ability to exploit misspecified reward processes, and the ease with which they can represent a reward-seeking policy, will grow as models get more capable. Without countermeasures we should expect reward-seeking behavior to become more likely, but we are far from the point at which such models are an active risk.*
We think that we should be on the lookout for opportunities to empirically show realistic, dangerous behavior with models of increasing capabilities.
To be more concrete, we propose to discuss this work in the related work section with the following, more compressed paragraph:
*Our paper mainly provides theoretical evidence of failure modes of RLHF under partial observations. [2] provides first empirical evidence of deceptive inflation: a model zero-shot generalizes from more benign behavior to deceiving a synthetic human with unobserved actions that modify a file. This behavior is then subsequently reinforced in an RL stage by a reward function that does not “observe” the file tampering. The paper also shows that – very rarely – their model can zero-shot generalize from less serious behavior to outright unobserved reward-tampering. We are unaware of work showing empirical evidence for our second failure mode, overjustification.*
Additionally, we will mention in our conclusion that we welcome future work with further empirical investigations of failure modes under partial observability.
Please let us know whether this alleviates some of your concerns regarding empirical evidence of our proposed failure modes.
[1] Dario Amodei et al., Learning from human preferences, https://openai.com/index/learning-from-human-preferences/, 2017
[2] Carson Denison et al., [Sycophancy to Subterfuge: Investigating Reward-Tampering in Large Language Models](https://arxiv.org/abs/2406.10162v3), arxiv e-prints, 2024 | Rebuttal 1:
Rebuttal: # Global Rebuttal
In this global answer, we want to advocate for the inclusion of our purely theoretical work. The reviewers highlight the following (and more!) positive aspects of our work:
- *On the problem setting*: Our setting is “vital” (LhS3) and has “significant value” (zgLN).
- *On deceptive inflation/overjustification*: These are “well aligned with many practical scenarios” (LhS3).
- *On the ambiguity in the return function*: We “Completely characterize the ambiguity” (v92A), which is “enlightening” (LhS3).
- *On the theory*: “very clean” (v92A), with “detailed and solid proofs” (zgLN), and “rigorous [...] theorems” (LhS3).
- *On examples*: Our “many examples and counterexamples” (zgLN) are “very comprehensive” (v92A), and “intuitive” (LhS3).
- *On the presentation*: We include “several informative figures” (rT4E) and have an “overall presentation [that] is very clear”, which the reviewer “enjoyed reading” (LhS3).
On the other hand, all four reviewers highlight the lack of experiments and metrics to empirically validate the failure modes of deceptive inflation and overjustification. Reviewers v92A and rT4E furthermore highlight the lack of proposed (practical) solutions or metrics to judge progress toward a solution.[a] We overall agree that these are valuable goals to have, but consider them future work, for our work is meant to lay the conceptual and theoretical groundwork for studying RLHF under partial observability. Our goal was to clearly formalize a problem that has remained unformalized despite being informally known since the inception of RLHF through the [robot-hand](https://openai.com/index/learning-from-human-preferences/), and later the [ELK report by Christiano et al](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.kkaua0hwmp1d).
There is a large history of published work at NeurIPS, ICML and ICLR that is purely theoretical and comes without practical solutions. We highlight some that are related to our work: In [1], the authors theoretically analyze AI alignment in a setting in which the human’s reward function can change over time. In [2], it is shown that without knowledge of the algorithm with which humans choose their actions, one cannot learn their preferences. In [3], a theoretical umbrella of prior reward-learning work is built. In [4], the authors analyze what happens when the optimized reward function misses attributes important to humans. And in [5], the whole alignment problem is studied in a purely conceptual, pre-theoretic way. All these works are theoretical or conceptual, come without empirical investigations, and without new solutions to the proposed problems. **But crucially, many of these works are well-known and have guided the intuitions of researchers about the problems to tackle in our field.**
Given that similar work has been published before, it is our opinion that our theoretical work should be judged based on the quality of the theory we provide. With the large positive sentiment about all aspects of our work that we quoted above (on the problem setting, deceptive inflation/overjustification, the ambiguity in return functions, theoretical quality, examples, and presentation), we believe that we meet the standard for NeurIPS. We are happy to discuss further in the discussion phase, including in our answers to the individual reviews.
[1] Micah Carroll et al., *AI Alignment with Changing and Influenceable Reward Functions*, ICML, 2024
[2] Stuart Armstrong, Sören Mindermann, *Occam's razor is insufficient to infer the preferences of irrational agents*, NeurIPS, 2018
[3] Hong Jun Jeon et al., *Reward-rational (implicit) choice: A unifying formalism for reward learning*, NeurIPS, 2020
[4] Simon Zhuang, Dylan Hadfield-Menell, *Consequences of Misaligned AI*, NeurIPS, 2020
[5] Richard Ngo et al., *The Alignment Problem from a Deep Learning Perspective*, ICLR, 2024
[a]: While we do not have practical solutions, we **do** have theoretical beginnings of a solution. In particular, if B is known, then in Proposition C.5 we prove that there is a differentiable loss function whose minima contain only feedback-compatible reward functions, which are safe if the ambiguity $\text{ker}(\mathrm{B} \circ \Gamma)$ disappears. This is discussed in Section 6. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Pure Message Passing Can Estimate Common Neighbor for Link Prediction | Accept (poster) | Summary: This paper studies the problem of encoding pairwise structural features to enhance link prediction (LP). They note that previous literature has shown that standard MPNNs cannot encode the necessary pairwise information needed for LP. The authors argue that with careful design, MPNNs can in fact estimate pairwise structural features (e.g., common neighbors). This is done by equipping each node with quasi-orthogonal vector, which is used to estimate the counts of various structural features between two nodes. The authors then benchmark their method(s) on a number of datasets and compare against prominent benchmarks. They further highlight the efficiency of their method against other methods.
Strengths: 1. The paper is well-written and easy to understand. I think the motivation and the design of the method is explained quite clearly.
2. The problem itself is an important one. Recently, many methods have been proposed for estimating the pairwise structural information between nodes. However, there is a trade-off between expressivity and efficiency (e.g., SEAL is very expressive but slow while BUDDY or NCN is less expressive but much faster). Balancing these concerns is crucial for enhancing current LP methods as for current methods, those that are expressive tend to be computationally inefficient and vice-versa.
3. Overall, the performance is good. More so, when one considers the efficiency of MPLP+.
Weaknesses: Weakness 1:
----------------
My main concern is that I'm unsure why the performance of this method is this good. Let me explain. MPLP estimates the structural counts via the use of quasi-orthogonal vectors. The predictor (i.e., score function) then takes as input (a) the elementwise product of both node representations (outputted by a MPNN) and (b) the estimated structural counts, which are of the form #(p, q), where (1, 1) corresponds to CNs. (I'm just restating the equation at the end of Section 4.2.)
However, **how is this appreciably different from BUDDY [16]?** In the score function, they also include the elementwise product of both nodes in the target link. They then concatenate the same kind of structural counts considered by MPLP. They differ only in their method of estimating those counts. BUDDY (and ELPH) use subgraph sketching instead of orthogonal vectors. As such, **I can't seem to understand why MPLP/MPLP+ do so much better than BUDDY** (I know you report ELPH in your tables but it only does marginally better than BUDDY). If the performance difference were small I wouldn't be as confused, but the gap on some datasets is enormous. For example, BUDDY has a 49.85 Hits@100 on ogbl-PPA while MPLP+ has 65.24. A smaller, but still noticeable, gap is present on ogbl-Citation2 with BUDDY at 87.56 and MPLP+ at 90.72. My point is, the difference in performance is non-trivial. The authors clearly report that their method can outperform BUDDY.
But as I noted, there doesn't seem to be any specific reason why this occurs, as they are estimating the same information. Is the approximation used by the authors better? If so, then why? The authors seem to argue this in Appendix F.1, where they compare the MSE of the label counts by ELPH/BUDDY and theirs. However, I have a few issues with the results: **(a)** The results aren't very clear to me. For example, is it on all test samples? Positive and negative? **(b)** Also, the authors don't specify the # of hops used when estimating the counts for ELPH/BUDDY. From personal experience using their code, I find setting hops=3 can help improve the counts while still being fairly efficient. **(c)** **Most importantly, the authors only compare to MPLP and not MPLP+**. To me, this is a very important point as MPLP+ is itself a compromise between efficiency and estimation quality. As shown in Table 2, MPLP is impractical on larger datasets as it goes OOM. So why not compare the estimation quality of MPLP+, the model which still performs well but can scale to larger graphs? In my opinion it is very important that the authors include MPLP+ in the experiments in Appendix F.1.
I apologize for the wall of text, but the magnitude of the increase over BUDDY seems very strange to me. To summarize, I suggest the authors: (a) give more explanation as to why the large discrepancy in performance may exist, and (b) include MPLP+ in Figures 8 and 9. I think if the authors can show that MPLP+ still does a much better job at estimating the structural features, that may be the answer. Otherwise, I very strongly recommend the authors try to give any differences that exist between the two methods that can explain it.
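For context, the quasi-orthogonal-vector mechanism under discussion can be sketched as follows (a toy illustration with a made-up random graph and dimensions, not the authors' actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 2048                 # number of nodes, signature dimension (made up)

# A random undirected graph (made-up density), as a dense adjacency matrix.
A = (rng.random((n, n)) < 0.01).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Each node gets an i.i.d. Gaussian "signature"; in high dimension these are
# quasi-orthogonal: <x_u, x_v> is near 0 for u != v, while <x_u, x_u> is near 1.
X = rng.normal(size=(n, d)) / np.sqrt(d)

# One round of message passing: h_u = sum of the signatures of u's neighbors.
H = A @ X

# Then <h_u, h_v> sums <x_w, x_w'> over w in N(u), w' in N(v); only the terms
# with w == w' survive in expectation, giving the common-neighbor count.
cn_true = (A @ A)[:50, :50]       # exact common-neighbor counts
cn_est = H[:50] @ H[:50].T        # quasi-orthogonal estimate
err = np.abs(cn_est - cn_true).mean()
print(err < 1.0)                  # True: estimates track the exact counts
```

The point is that both MPLP's signatures and BUDDY's sketches are randomized estimators of the same structural counts, which is why the performance gap is surprising to me.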
Weakness 2:
----------------
I find it odd that MPLP+ is more efficient than BUDDY. MPLP+ and BUDDY both use L layers of message passing. Furthermore, as noted before, the scoring function is similar. The only difference is in estimating the structural counts. However, BUDDY pre-computes and caches the subgraph counts, so it is actually quite efficient as they don't need to be recomputed. This is as opposed to MPLP+, which does have to recompute them each time. Because of this, I don't really see how BUDDY can be slower except for various differences in implementation; from the theoretical complexity, I wouldn't expect BUDDY to be slower. Again, I recommend the authors try to explain why this might be the case.
Weakness 3:
----------------
I should have noticed this earlier, but the similarity between MPLP and BUDDY hurts the novelty of the paper. As far as I can see, the authors are proposing a new method that can potentially estimate the structural counts a little better and faster. While noteworthy, this is a small contribution. However, I personally only find this to be a minor weakness of the paper.
Weakness 4:
----------------
Some benchmark datasets are missing, including Cora, Citeseer, and Pubmed (see [33]); I recommend including them. Also, for HeaRT, I recommend reporting the results on all datasets and not just ogbl-ppa/collab/citation2. To be clear, this is a very minor weakness.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Can you reproduce Figures 8 and 9 with MPLP+ against BUDDY/ELPH?
2. Can you better explain why MPLP+ is more efficient than BUDDY?
3. Are there any other contributing factors that may lead to MPLP/MPLP+ doing better than a similar method like BUDDY?
Other:
--------
1. I recommend including the results of BUDDY in the main tables. I know you include ELPH and it typically does better, but it gets confusing when the results are for ELPH but the efficiency for BUDDY. I think it'll just make the experiments section easier to read.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first want to thank the reviewer for the extensive feedback, which is extremely valuable to us given the large reviewing load this year. We also greatly appreciate the reviewer's positive feedback on the importance of the problem we address, especially the acknowledgment of the trade-off between expressivity and efficiency for link prediction tasks. We will address the following points in the rebuttal:
## W1: Comparison with ELPH/BUDDY
Due to the space limit, we post this rebuttal as the **"global" rebuttal** on the very top of the page. Please refer to the global rebuttal for the detailed response.
## W1 (cont.) Issues about Appendix F.1
We apologize for any confusion in Appendix F.1. There, we find that ELPH's MinHash/HyperLogLog can introduce high estimation variance compared to MPLP. We address the reviewer's concerns as follows:
### For example, is it on all test samples? Positive and negative?
We report the estimation quality on a set of both positive and negative edges. For positive edges, we use all the training edges. For negative edges, we randomly sample the same number of negative edges as the positive edges. We keep the edges consistent across MPLP and ELPH to ensure a fair comparison.
In MPLP, we use the shortcut removal technique to avoid the distribution-shift problem. For a fairer comparison, we also implement the shortcut removal technique for ELPH in Appendix F.1. We will clarify this in the revised version.
### the authors don't specify the # of hops used when estimating the counts for ELPH/BUDDY.
For ELPH, we use the node signatures of 2-hop neighbors to get estimations for #(1,1), #(1,0), #(1,2), #(2,2) and #(2,0). Since the estimation variance becomes higher for both MPLP and ELPH at #(2,2) and #(2,0), we did not report the variance beyond 2-hop neighbors. It is very interesting to learn from the reviewer that setting hops=3 can help improve performance; we had thought the variance would be too high to be useful. In our experiments, the performance gain from 3-hop neighbors of MPLP(+) is marginal.
### Most importantly, the authors only compare to MPLP and not MPLP+
MPLP+ differs from MPLP/ELPH/BUDDY in that it estimates the number of walks between two nodes rather than the number of nodes. Therefore, to have an apples-to-apples comparison, we can only compare MPLP to ELPH in Appendix F.1 when evaluating the quality of estimating the number of nodes.
## W2
We apologize for any confusion about the efficiency of MPLP+. MPLP+ is similar to BUDDY and can also cache the node signature during the inference time. In Appendix D.3, we discuss the detail of benchmarking the inference time for both BUDDY and MPLP+.
Theoretically, MPLP+ and BUDDY should have similar efficiency; in practice, however, their implementations cause the efficiency discrepancy. For MPLP(+), we utilize `pytorch_sparse` to implement the message passing, which is built on [3]. BUDDY utilizes `pytorch_scatter` to implement its message passing, which is both slower and more memory-hungry compared to the sparse-dense matrix multiplication (SpMM) operation in MPLP(+).
In fact, we also submitted a Pull Request to BUDDY's GitHub repo to change its message-passing implementation to SpMM. Even though BUDDY with SpMM shows comparable efficiency to MPLP+, it still exhibits higher estimation variance (Appendix F.1) and lower overall performance (Tables 1/2) compared to MPLP(+).
[3] Design Principles for Sparse Matrix Multiplication on the GPU.
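To make the implementation difference concrete, here is a minimal pure-Python sketch of the two aggregation patterns (a toy example with a made-up graph, not our benchmarked code). Both compute the same sum aggregation $H = AX$; the real `pytorch_scatter` and `pytorch_sparse` kernels are GPU implementations of these same two strategies.

```python
# Triangle graph on 3 nodes, stored two ways.
edges = [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]  # directed edge list
X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]                  # node features (n=3, f=2)
n, f = 3, 2

# Scatter-style aggregation: one add per edge (the scatter-add pattern).
H_scatter = [[0.0] * f for _ in range(n)]
for src, dst in edges:
    for k in range(f):
        H_scatter[dst][k] += X[src][k]

# SpMM-style aggregation: row-wise products over a CSR adjacency
# (the sparse-dense matrix multiplication pattern).
indptr, indices = [0, 2, 4, 6], [1, 2, 0, 2, 0, 1]
H_spmm = [[sum(X[j][k] for j in indices[indptr[i]:indptr[i + 1]])
           for k in range(f)] for i in range(n)]

# The two strategies produce identical aggregates; on GPUs, fused SpMM
# kernels typically run faster and use less memory than per-edge scatter-adds.
```

The math is identical in both cases; only the memory-access pattern and kernel fusion differ, which is exactly why the efficiency gap disappears once BUDDY's aggregation is switched to SpMM.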
## W3
Thank you for pointing this out. We discuss in the paper that our link prediction framework shares a similar spirit of learning structural representations with ELPH/BUDDY. However, beyond ELPH/BUDDY, MPLP(+) introduces several distinct components (Shortcut Removal, Norm Scaling, One-hot hubs, walk-based counting) that significantly boost model performance, built on a totally different yet simple node/walk-estimation mechanism (quasi-orthogonal vectors).
## W4
We appreciate the reviewer's constructive suggestion. We do not include Cora/Citeseer/Pubmed in our main experiments because recent studies [4,5], as well as our own findings, show that these three datasets overwhelmingly rely on node attributes for link prediction. An MLP without graph structural information can already achieve performance comparable to GNNs on these datasets, which makes them not sensitive enough to distinguish between different link prediction methods.
For the HeaRT setting, we will report the results on all datasets in a revised version.
[4] Linkless Link Prediction via Relational Distillation
[5] Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking
## Q1
Please refer to rebuttal for W1 (cont.) Issues about Appendix F.1.
## Q2
Please refer to rebuttal for W2.
## Q3
Please refer to rebuttal for W1.
## Q4
We thank the reviewer for the suggestion. We will include BUDDY's results in the main tables to make them clearer.
---
**Thanks again for your comments and diligence in reviewing our work. We hope our responses have addressed your concerns. If so, we hope that you will consider raising your score. If there are any notable points unaddressed, please let us know and we will be happy to respond.**
---
Rebuttal 2:
Comment: Thanks for the detailed response. I list my further comments below.
| However, BUDDY cannot perform a weighted counting due to the mechanism of MinHash and HyperLogLog.
I agree with this in relation to BUDDY. But I'm skeptical that this is actually important. Of course RA/AA do better than just CN, but let's consider NCN. It doesn't consider the degree of the CNs and does very well. Of course, one can argue that maybe the degree information gets encoded in the node representations in NCN, but we don't really know since it's not explicit in any way. So I'm technically not disagreeing with your claim, but I'm unsure if the proof really exists to claim that this matters.
| ELPH/BUDDY can cause a distribution shift problem, while MPLP will not
I agree in theory, but in practice recent work has shown that the effect of this distribution shift is quite minimal. NCN/NCNC also removes the link during training. On the OGB datasets, their studies show the effect is minimal, at best (Note: The results are only in an older version of the paper [here](https://arxiv.org/pdf/2302.00890v2) in Table 3 under NoTLR). Also, [1] below studies the problem and concludes that it really only affects nodes of lower degree. They show that the overall performance on ogbl-collab and ogbl-citation2 barely changes when removing target links.
So again, I'm not disagreeing with you and I agree that this is a plus for MPLP/MPLP+. I'm just skeptical that it matters a lot in practice.
[1] Zhu, Jing, et al. "Pitfalls in link prediction with graph neural networks: Understanding the impact of target-link inclusion & better practices." Proceedings of the 17th ACM International Conference on Web Search and Data Mining. 2024.
| In our experiments, the performance gain from 3-hop neighbors of MPLP(+) is marginal.
Interesting. It's likely dataset dependent.
| While theoretically, MPLP+ and BUDDY should have similar efficiency. However, the implementation of two methods causes the efficiency discrepancy.
Ok, that makes sense. Please include a discussion of this when showing the efficiency results in the paper. As the current results are misleading.
| We do not include the Cora/Citeseer/Pubmed in our main experiment because recent studies[4,5] and we find that these three datasets overwhelmingly rely on the node attributes for link prediction tasks.
Ok, that's fair.
| However, beyond ELPH/BUDDY, MPLP(+) proposes several distinct components (Shortcut Removal, Norm Scaling, One-hot hubs, walk-based counting)
I thank the authors for the clarification.
However, I'm not disagreeing that the mechanism for estimating nodes/walk counts is different. That is a notable achievement. My point was regarding, as you mentioned, the "spirit" of the method being quite similar. That is still true. Furthermore, some of the components mentioned such as shortcut removal aren't proposed by this paper. In fact, it's used by multiple link prediction methods. This is also true for norm scaling where the authors explicitly refer to [12, 25] in their paper (line 246). To be clear, this isn't a bad thing, but I think the authors should be clear in their paper about what specifically they propose that's novel.
| MPLP+ estimates the number of walks between two nodes
Thanks for the clarification. I recommend the authors make this a little more explicit in their paper, as when introducing MPLP+ it's only mentioned briefly on line 294.
Furthermore, in my opinion, the fact that MPLP+ uses walk-based features raises 2 more questions:
Q1: How well does it estimate walk of disparate lengths? I recommend the authors try to incorporate this into future versions of their paper. Because as of now, we really don't know how well it actually estimates the walks. This matters because it will give us some understanding on whether estimating these counts is actually important and driving the performance gain.
Q2: It doesn't explain to me why MPLP+ can perform so well. In general, MPLP+ tends to do slightly worse than MPLP across almost all datasets. Naively, this would suggest that if it were computationally feasible to run on ogbl-ppa, it would also do very well ~65 Hits@100. But this would still be much much higher than the performance of BUDDY on ogbl-ppa which is ~49. A similar observation stands for ogbl-citation2. I know this is hypothetical as the results don't exist, but I would find it hard to believe that estimating the node counts slightly better would lead to such a huge increase. Also, we don't really have any reason to expect that using walks in MPLP+ would help on the OGBs. In [33] they show that Katz (which just weights these walks) always does same/worse than RA/AA. So that likely can't explain it either for just MPLP+.
Basically, as I mentioned in my original review, I don't really understand why MPLP/MPLP+ does so well, especially compared to BUDDY. Because of that I'll keep my score.
---
Rebuttal 3:
Title: Round 2 [1/2]
Comment: We appreciate the reviewer's feedback. We address the following points in your feedback:
## Empirical advantages of MPLP+'s Norm Scaling and Shortcut Removal. Q2
We thank the reviewer for the insightful comments. In our previous response, we discussed the theoretical advantages of Norm Scaling and Shortcut Removal. The ablation studies in Tables 7/8 show that these two techniques can effectively boost MPLP's performance, depending on the dataset. We also show in Appendix F.1 that MPLP(+) has better estimation accuracy compared to ELPH/BUDDY.
Here, we further investigate the empirical advantages of Norm Scaling and Shortcut Removal in MPLP+, especially on the PPA and Citation2 datasets. We keep all other hyperparameters the same as in the main experiments, and only remove Norm Scaling for PPA and Shortcut Removal for Citation2. The results are shown in the table below:
| | PPA(Hits@100) | Citation2(MRR) |
|---|---|---|
| BUDDY | 49.85 | 87.56 |
| MPLP+ | 65.24 | 90.72 |
| MPLP+ w/o Shortcut Removal | 64.53 | 87.76 |
| MPLP+ w/o Norm Scaling | 53.94 | 90.25 |
As shown in the table, the performance of MPLP+ drops significantly when removing Norm Scaling on PPA. This indicates that Norm Scaling is crucial for capturing the nuanced structural information in the PPA dataset and is the driving factor behind MPLP+'s performance there. On the other hand, the performance of MPLP+ decreases when removing Shortcut Removal on Citation2, suggesting that Shortcut Removal is essential for avoiding the distribution-shift problem on that dataset. Interestingly, the degraded performance of MPLP+ is close to BUDDY's when removing Norm Scaling on PPA and Shortcut Removal on Citation2. This empirically demonstrates that Norm Scaling and Shortcut Removal are effective techniques for improving the performance of MPLP+ over BUDDY.
Since NCN uses the common neighbors' node representations as the link representation, it is not directly comparable to MPLP(+). NCN cannot exploit node degree information or the shortcut removal technique in the same way, so its strong performance does not imply that Norm Scaling and Shortcut Removal are ineffective for MPLP(+). We will include this empirical analysis in the revised version.
In fact, MPLP+ with Norm Scaling can be more expressive than NCN. We include a proof sketch at the end of this response.
The more expressive MPLP can face more severe distribution shift problems than NCN (Sec 5.3.2 in [2]). Therefore, the Shortcut Removal technique is more crucial for MPLP to avoid the distribution shift problem compared to NCN.
[2] FakeEdge: Alleviate Dataset Shift in Link Prediction
## Other concerns
We greatly appreciate the reviewer's suggestions. We will revise the paper to more explicitly indicate the walk-based counting mechanism of MPLP+ and the reference of the proposed techniques of MPLP(+). We will also include a discussion when comparing the efficiency results between BUDDY and MPLP.
## Q1: Estimation quality of MPLP+
We thank the reviewer for the suggestion. Similar to the experiments in Appendix F.1, we include the walk estimation quality (MSE) of MPLP+ on Collab below:
| Signature Dimension $F$ | #(1,1) | #(1,2) | #(1,0) | #(2,2) |
|---|---|---|---|---|
| 1000 | 2.0 | $4.4\times10^3$ | $3.7\times10^3$ | $1.0\times10^{11}$ |
| 1500 | 0.49 | $1.0\times10^3$ | $2.1\times10^3$ | $1.3\times10^{11}$ |
| 2000 | 0.55 | $5.2\times10^2$ | $1.5\times10^3$ | $1.1\times10^{11}$ |
| 2500 | 0.23 | $4.3\times10^2$ | $1.9\times10^3$ | $1.2\times10^{11}$ |
Since there is no other baseline for counting walks on graphs, we can only examine the results on their own. As shown in the table, the walk estimation quality of MPLP+ becomes worse as the number of hops increases. This is because, with longer hops, the number of walks between two nodes grows exponentially, leading to higher estimation variance. At #(2,2), the estimation variance becomes too high to be useful. This is why we observe that the performance gain from neighbors at 3 hops and above is marginal for MPLP(+). In the meantime, increasing the signature dimension $F$ can help reduce the estimation variance. We will include this analysis in the revised version.
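As a toy illustration of the walk-estimation mechanism (a self-contained sketch, not our benchmarked code): repeated sum-aggregation propagates $h^{(k)}_u \approx \sum_v (A^k)_{uv} x_v$, so the inner product $\langle h^{(k)}_u, x_v \rangle$ estimates the number of length-$k$ walks from $u$ to $v$, with variance that grows as walk counts grow.

```python
import math
import random

random.seed(1)

# Path graph 0-1-2-3: exactly one length-2 walk from node 0 to node 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

F = 5000  # signature dimension; larger F reduces estimation variance
# Random quasi-orthogonal signatures with i.i.d. N(0, 1/F) entries.
sig = {v: [random.gauss(0.0, 1.0 / math.sqrt(F)) for _ in range(F)]
       for v in adj}

def propagate(h):
    """One round of sum-aggregation message passing: h_v <- sum_{w in N(v)} h_w."""
    return {v: [sum(h[w][i] for w in adj[v]) for i in range(F)] for v in adj}

h2 = propagate(propagate(sig))  # h2_u ≈ sum_v (A^2)_{uv} x_v

# <h2_0, x_2> estimates (A^2)_{0,2} = 1 (the single walk 0 -> 1 -> 2).
est_walks = sum(a * b for a, b in zip(h2[0], sig[2]))
```

On this tiny graph the estimate is close to the true count of 1; with longer walks or denser graphs, the cross-terms accumulate and the variance grows with the magnitude of the walk counts, matching the table above.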
-----
We hope that we have addressed your concerns. Please let us know if there are any other concerns. We will be happy to respond.
---
Rebuttal Comment 3.1:
Comment: > As shown in the table, the performance of MPLP+ drops significantly when removing Norm Scaling on PPA
I appreciate the experiments, but my argument isn't that these strategies can't help; rather, (a) the impact is limited and (b) they aren't techniques introduced by the authors of this paper (i.e., not novel contributions). These are true, as shortcut removal has a marginal impact on ogbl-ppa and norm scaling has the same on ogbl-citation2. I'll admit I'm also surprised that shortcut removal has such a big effect on MPLP+ on citation2. As shown in the NCN paper (linked in my last response), shortcut removal actually hurts the performance of NCN on ogbl-citation2.
> we include the experimental results for the walk estimation quality of MPLP+ on Collab below
Thanks a lot. I agree there's no way to really judge these w/o a baseline (not your fault). However, at first glance the MSE seems high. Compared to the node estimations shown in Fig 8, it's a lot higher. Though I guess part of that is because there are just many more walks. Regardless, this does suggest the estimation isn't great.
I really do appreciate all the discussion and additional experiments but my main concerns haven't been addressed. So I will keep my score.
---
Reply to Comment 3.1.1:
Title: Round 3
Comment: We really appreciate the reviewer's active engagement and insightful questions during the rebuttal-discussion process. Since the reviewer still has a major concern about the performance advantage of MPLP+ over BUDDY, we want to summarize the key points discussed so far:
## Summary of why MPLP+ can outperform BUDDY on PPA and Citation2:
1. **From the theoretical perspective**: MPLP+ has two key components, Norm Scaling and Shortcut Removal, which can boost the performance of MPLP+ over BUDDY. The Norm Scaling improves the expressiveness of MPLP+ by enabling weighted counting, while the Shortcut Removal avoids the distribution shift problem during training.
2. **From the empirical ablation study**: We conduct an additional ablation study to show that Norm Scaling and Shortcut Removal are crucial for the performance improvement of MPLP+ on PPA and Citation2. The empirical results demonstrate that Norm Scaling and Shortcut Removal are effective techniques for improving the performance of MPLP+ over BUDDY.
3. **From the experimental results**: In Table 2, we show that MPLP+, with distinct components like Norm Scaling and Shortcut Removal, can achieve better performance than BUDDY on both PPA and Citation2. In fact, MPLP+ not only outperforms BUDDY but also achieves state-of-the-art performance on PPA and Citation2 on the OGBL leaderboard.
We believe that the theoretical and empirical evidence provided above can explain how and why MPLP+ can outperform BUDDY on PPA and Citation2. We are eager to know what remains unclear or unsatisfactory to the reviewer and will be happy to provide further clarification. This can greatly help us improve the clarity of the paper and the presentation of the results.
---
Rebuttal 4:
Title: Round 2 [2/2]: Proof Sketch for MPLP's Expressiveness over NCN
Comment: Proof Sketch for MPLP's Expressiveness over NCN:
We present a proof sketch to show that MPLP is more expressive than NCN, specifically because MPLP enables weighted node counting via the Norm Scaling technique (Sec 4.1). Consider an $n_1 \times n_1$ rook's graph and an $n_2 \times n_2$ rook's graph, where $n_1 \neq n_2$. Both are strongly regular graphs, but with different node degrees. Moreover, any two non-adjacent nodes in a rook's graph have exactly two common neighbors. We investigate whether NCN/MPLP can encode a non-adjacent node pair in the $n_1 \times n_1$ rook's graph **differently** from a non-adjacent node pair in the $n_2 \times n_2$ rook's graph.
For NCN, (1) the GNN encoder (GCN/SAGE) will generate the same representation for all nodes because the graphs are regular; (2) all non-adjacent node pairs have two common neighbors, leading to the same NCN score.
For MPLP, because $n_1 \neq n_2$, the Norm Scaling technique will assign different norms to the quasi-orthogonal vectors of nodes in $n_1 \times n_1$ and $n_2 \times n_2$ rook's graphs. Therefore, the non-adjacent node pairs in $n_1 \times n_1$ and $n_2 \times n_2$ rook's graphs will have different MPLP representation due to the weighted node counting. This shows that MPLP is more expressive than NCN. | Summary: The authors of this work introduce the Message Passing Link Predictor (MPLP), a link prediction model that uses pure message passing - as originally proposed by Gilmer et al. - to estimate structural similarities between nodes. The method is motivated by the fact that a single round of message passing using one-hot-encodings of nodes in un-attributed graphs can be used to estimate the common neighbors between node pairs. This idea is used to develop MPLP, a novel link prediction model that uses message passing to construct quasi-orthogonal vectors, whose inner product can be used to efficiently estimate structural similarities. Combining those vectors with node features and shortest path neighborhoods, MPLP obtains node representations that can be used by a classifier to predict links. Moreover, an advanced model MPLP+ is proposed with a simplified estimation of shortest path neighborhoods based on multiple rounds of message passing using one-hop information only. The two resulting models are evaluated in eight unattributed and seven attributed graphs, showing excellent performance compared to nine baseline methods.
Strengths: [S1] Showing that a single round of neural message passing in a Graph Convolutional Network (GCN) and SAGE can be used to estimate the number of common neighbors between node pairs, while multiple rounds of message passing can estimate the number of walks between node pairs, this work provides interesting insights into the design of Graph Neural Networks, as well as its inherent relations to heuristic link prediction techniques.
[S2] The work includes both theoretical and practical insights that make it interesting for a wide range of researchers and practitioners in graph learning.
[S3] The two proposed models are extensively evaluated in large attributed and unattributed graphs, showing promising performance across a wide range of data sets from different contexts.
[S4] The inference time of both proposed models and baseline methods is compared across three large data sets from the OGB benchmark, demonstrating that despite its improved performance the proposed methods are actually more efficient than some of the baseline methods.
[S5] In the appendix, the authors included extensive ablation studies in which they investigated the impact of different structural estimators, the sensitivity to parameter values, and the impact of removing specific components of the architecture.
[S6] The paper is very well-written, clearly motivating the problem and giving a good intuition of the proposed approach. I enjoyed reading this work, although some of the math in the appendix is hard to follow for non-specialists.
Weaknesses: [W1] I could not follow the role of the weight matrix W, which is assumed to be randomly initialized from a zero-mean distribution, in theorem 3.1, see Q1.
[W2] The experimental evaluation does not include results on the training time of the different models.
[W3] The evaluation is solely based on HITS@50, see Q3.
Technical Quality: 4
Clarity: 4
Questions for Authors: [Q1] Referring to W1, what would happen if we assume that the weight matrix is absent, i.e. if we remove the weight parameters in the aggregation function?
[Q2] Related to W2, how does the training time of the proposed models compare to those of the baseline methods?
[Q3] I understand that the authors followed the evaluation metrics of OGB. Nevertheless, it would be good to include additional evaluation metrics such as accuracy, AUC-ROC or AUPR, see e.g. [arXiv 1505.04094].
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The paper includes a detailed discussion of limitations in section H of the appendix, which I consider sufficient for this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first want to express our gratitude for the reviewer's comment that the reviewer "enjoyed reading this work". For us, it is the most rewarding feedback to know that our work is well-received. We will address the following points in the rebuttal:
## Q1
The weight matrix $W$ in Theorem 3.1 plays no essential role in the theorem; we keep it only because the standard GCN/SAGE formulation includes a weight matrix. If we remove $W$, the entire theorem still holds, except that the constant $C$ changes to $C = \sigma^2_{node}F$. In other words, the untrained zero-mean-initialized weight matrix $W \in \mathbb{R}^{F^{\prime} \times F}$ only introduces a scaling factor to the expectation of the inner product.
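As a hedged, weight-free illustration of the theorem's mechanism (a toy sketch in pure Python, not the paper's implementation): with entries drawn i.i.d. from $N(0, 1/F)$ so that $C = \sigma^2_{node}F = 1$, the inner product of one-round message-passing outputs estimates the common-neighbor count.

```python
import math
import random

random.seed(0)

# Toy undirected graph: nodes 0 and 1 share common neighbors 2 and 3.
adj = {0: [2, 3, 4], 1: [2, 3, 5], 2: [0, 1], 3: [0, 1], 4: [0], 5: [1]}

F = 5000  # signature dimension; larger F reduces estimation variance

# Each node gets a random signature x_v with i.i.d. N(0, 1/F) entries, so
# E[<x_u, x_v>] = 0 for u != v and E[<x_v, x_v>] = 1 (i.e., C = 1 here).
sig = {v: [random.gauss(0.0, 1.0 / math.sqrt(F)) for _ in range(F)]
       for v in adj}

# One round of sum-aggregation message passing: h_v = sum_{w in N(v)} x_w.
h = {v: [sum(sig[w][i] for w in adj[v]) for i in range(F)] for v in adj}

# <h_0, h_1> estimates |N(0) ∩ N(1)| = 2 in expectation.
est_cn = sum(a * b for a, b in zip(h[0], h[1]))
```

Here `est_cn` lands close to the true common-neighbor count of 2, and re-introducing a zero-mean random weight matrix would only rescale this expectation, consistent with the response above.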
## Q2
We appreciate the reviewer's suggestion. Generally, it is challenging to provide a fair evaluation of training times across different methods due to variations in learning rates, batch sizes, and early-stopping criteria. Since the number of trainable parameters in GNNs is relatively small compared to other deep learning models, training time is usually not a bottleneck. Most of the computational cost in GNNs comes from processing the graph structure (see Table 1 in [7]), such as message passing [1,2] or the labeling trick [3] for link prediction. Therefore, inference time can serve as a proxy for the computational cost of training. In Figure 4 of the experimental section, MPLP is shown to be more efficient than other methods in terms of inference time, indirectly indicating that it is also more computationally efficient to train.
## Q3
We thank the reviewer for the suggestion. We follow the evaluation protocol of previous works[4,5] to ensure a fair comparison. While we acknowledge that other metrics like AUC-ROC and AUC-PR are important for evaluating link prediction tasks, recent studies[6] have shown that performance evaluated by AUC-ROC and AUC-PR is nearly saturated for most methods. Therefore, we focus on the evaluation of MRR and Hits@K, which are more sensitive to the performance differences between methods.
[1] SIGN: Scalable Inception Graph Neural Networks
[2] Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks
[3] Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning.
[4] Open Graph Benchmark: Datasets for Machine Learning on Graphs
[5] Graph Neural Networks for Link Prediction with Subgraph Sketching
[6] Neural Link Prediction with Walk Pooling
[7] MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization
---
**We hope that we have addressed your concerns. If we have left any notable points of concern unaddressed, please let us know, and we will be happy to respond.**
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses to my questions. In light of those responses as well as the responses to the other reviewers, I am happy to retain my positive evaluation. | Summary: This paper first shows that pure message passing can count common neighbor. Based on the proposed theory, this paper develops MPLP for link prediction, where the common neighbor is an important heuristic feature. Experiments on link prediction demonstrate the performance gains of MPLP over baselines.
Strengths: 1. This paper shows that pure message passing can count common neighbor.
2. The paper is well-written.
3. The authors provide the theoretical analysis and experimental details.
Weaknesses: 1. If MPLP aims to count the common neighbors exactly, why not directly use the pre-computed common neighbors? If MPLP aims to develop a neural network to approximate common neighbor heuristics, what is the novelty of MPLP compared with [1]?
2. To capture structural link representation such as Common Neighbors (CN), Zhang et al. [2] propose labeling trick, which is a position encoding in my opinion. Indeed, the Quasi-orthogonal vector is also a position encoding. I suggest discussing the relationship between Quasi-orthogonal vectors and existing position encodings such as random walk position encodings (RWPE), relative random walk probabilities (RRWP) [3], Resistance Distance encodings [4], and Laplace position encodings (LapPE). Notably, Laplace position encodings are also orthogonal.
3. Is MPLP scalable to large-scale graphs? If MPLP assigns each node a unique orthogonal vector, then orthogonality requires the dimension of the orthogonal vectors to be at least the number of nodes, as orthogonal vectors are linearly independent.
[1] Neural Common Neighbor with Completion for Link Prediction.
[2] Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning.
[3] Graph Inductive Biases in Transformers without Message Passing.
[4] Rethinking the Expressive Power of GNNs via Graph Biconnectivity.
Technical Quality: 2
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 1
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. We will address the following points in the rebuttal:
## W1
We apologize for any confusion regarding MPLP. MPLP aims to count the number of nodes at varying distances from a target node pair. Compared to pre-computed heuristics like Common Neighbor, MPLP is significantly more efficient in terms of both time and space complexity. In a graph with $N$ nodes, pre-computed common neighbors require $O(N^2)$ space to store all pairwise information, whereas MPLP only requires linear space $O(NF)$, where $F \ll N$ represents the node signature dimension. Regarding time complexity, MPLP can be seen as an approximation of the labeling trick[2], trading off variance for efficiency. Generally, MPLP is more scalable for large-scale graphs.
Unlike NCN[1], MPLP considers not only the common neighbors but also the nodes beyond the 1-hop neighborhood. This makes MPLP more expressive and powerful in capturing structural representation than NCN. The experimental results in our paper demonstrate the effectiveness of MPLP in link prediction tasks.
## W2
We appreciate the reviewer's suggestion. In fact, our quasi-orthogonal (QO) vectors are used to build a link **structural** representation rather than **positional** encoding [5]. There are distinct differences between position encoding and our QO vectors:
1. Unlike typical position encodings[3,4] designed for graph-level tasks, our QO vectors are specifically targeted for link prediction tasks. Position encodings represent the relative position of each node in the graph, whereas our QO vectors cannot represent individual node positions but can be decoded to represent pairwise **structural representation** for link prediction tasks.
2. When applying position encodings to link prediction tasks, most of them are limited to transductive settings. This is because position encodings represent node positions within a specific graph: if the graph changes, the position encodings must also change. However, our QO vectors can be decoded as a structural representation and applied in both transductive and inductive settings.
We will add a detailed comparison between QO vectors and position encodings in the revised version.
## W3
We apologize for any confusion regarding the scalability of MPLP. MPLP is designed to have greater scalability for large-scale graphs compared to methods like labeling tricks[2]. Each node is assigned a random **quasi-orthogonal** vector of dimension $F \ll N$. These vectors do not need to be strictly orthogonal but should be orthogonal in expectation. Therefore, MPLP is scalable for large-scale graphs.
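To illustrate the "orthogonal in expectation" point (a toy sketch, not the paper's code): even with far more nodes than dimensions, $F \ll N$, inner products between distinct random signatures concentrate around 0 with spread on the order of $1/\sqrt{F}$, while each self inner product concentrates around 1, so exact orthogonality is unnecessary.

```python
import math
import random

random.seed(0)

N, F = 2000, 200  # far more nodes than dimensions: exact orthogonality is impossible

# Random node signatures with i.i.d. N(0, 1/F) entries.
vecs = [[random.gauss(0.0, 1.0 / math.sqrt(F)) for _ in range(F)]
        for _ in range(N)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Cross inner products over a sample of distinct pairs: near 0, spread ~ 1/sqrt(F).
cross = [dot(vecs[i], vecs[j]) for i in range(20) for j in range(i + 1, 20)]

# Self inner products: near 1.
selfs = [dot(vecs[i], vecs[i]) for i in range(20)]
```

This quasi-orthogonality in expectation is exactly what lets the signature dimension $F$ stay fixed (and small) as the graph grows, at the cost of estimation variance that shrinks as $F$ increases.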
[1] Neural Common Neighbor with Completion for Link Prediction.
[2] Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning.
[3] Graph Inductive Biases in Transformers without Message Passing.
[4] Rethinking the Expressive Power of GNNs via Graph Biconnectivity.
[5] On the Equivalence between Positional Node Embeddings and Structural Graph Representations.
----
**We hope that we have addressed your concerns, and that you will consider raising your score. If we have left any notable points of concern unaddressed, please let us know, and we will be happy to respond.**
---
Rebuttal 2:
Comment: Thanks for your response. Unfortunately, my concerns about Weaknesses 1,2,3 remain unaddressed. The questions are as follows.
1. The authors claim that MPLP is more expressive and powerful than NCN, but I can not find the corresponding theoretical analysis.
2. The shortest path distance is a pairwise structural representation (heuristic metrics), as shown in NBFNet [6]. [3] shows that relative random walk probabilities (RRWP) can express the shortest path distance. Moreover, Resistance Distance encodings [4] are similar in my opinion. Moreover, the discussion with Laplace position encodings (LapPE) [7]---which are also orthogonal---is missing.
3. The graph-level classification is inductive. They first train GNNs on the training set of graphs and then infer on the test set of **unseen** graphs. Why are the position encodings limited to transductive settings?
4. If the random quasi-orthogonal vectors have dimension $F \ll N$, then their expectations also have dimension $F \ll N$. So why are they orthogonal in expectation? Can you provide the theoretical analysis or reference to prove that "these vectors do not need to be strictly orthogonal but should be orthogonal in expectation"?
[6] Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction.
[7] Rethinking graph transformers with spectral attention.
---
Rebuttal 3:
Title: Round 2
Comment: We thank the reviewer for the feedback on our rebuttal. We address the following points in your feedback:
## Concern 1
We show that MPLP is more expressive and powerful than NCN:
**Expressiveness**: We present a proof sketch showing that MPLP is more expressive than NCN, specifically because MPLP can perform weighted node counting via the Norm Scaling technique (Sec 4.1). Consider an $n_1 \times n_1$ rook's graph and an $n_2 \times n_2$ rook's graph, where $n_1 \neq n_2$. Both are strongly regular graphs, but with different node degrees. Moreover, any two non-adjacent nodes in a rook's graph have exactly two common neighbors. We investigate whether NCN/MPLP can encode a non-adjacent node pair in the $n_1 \times n_1$ rook's graph **differently** from a non-adjacent node pair in the $n_2 \times n_2$ rook's graph.
For NCN, (1) the GNN encoder (GCN/SAGE) will generate the same representation for all nodes because the graphs are regular; (2) all non-adjacent node pairs have two common neighbors, leading to the same NCN score.
For MPLP, because $n_1 \neq n_2$, the Norm Scaling technique will assign different norms to the quasi-orthogonal vectors of nodes in the $n_1 \times n_1$ and $n_2 \times n_2$ rook's graphs. Therefore, the non-adjacent node pairs in the two graphs will have different MPLP representations due to the weighted node counting. This shows that MPLP is more expressive than NCN.
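The counting claims in this sketch are easy to verify numerically. Below is a minimal self-check (ours, for illustration; not from the paper) that constructs the rook's graph adjacency directly:

```python
from itertools import combinations

def rook_graph(n):
    """Adjacency sets of the n x n rook's graph: vertices are (row, col),
    and two vertices are adjacent iff they share a row or a column."""
    vertices = [(i, j) for i in range(n) for j in range(n)]
    adj = {v: set() for v in vertices}
    for u, v in combinations(vertices, 2):
        if u[0] == v[0] or u[1] == v[1]:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def common_neighbor_counts(adj):
    """Set of common-neighbor counts over all non-adjacent vertex pairs."""
    return {len(adj[u] & adj[v])
            for u, v in combinations(adj, 2) if v not in adj[u]}

# Every non-adjacent pair in a rook's graph has exactly two common neighbors,
# regardless of n -- so an unweighted count cannot separate n1 from n2 ...
assert common_neighbor_counts(rook_graph(3)) == {2}
assert common_neighbor_counts(rook_graph(4)) == {2}
# ... while the node degrees (2(n-1)) do differ, which norm scaling exploits.
assert {len(nbrs) for nbrs in rook_graph(3).values()} == {4}
assert {len(nbrs) for nbrs in rook_graph(4).values()} == {6}
```

The assertions pass for any pair of distinct sizes, illustrating why a degree-sensitive (weighted) count can distinguish the two graphs while a plain common-neighbor count cannot.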
**Power**: In Tables 1/2, we empirically show that MPLP(+) achieves better performance than NCN on various datasets, which demonstrates the effectiveness of MPLP for link prediction tasks.
We thank the reviewer for pointing this out. We will include the proof in the revised version.
## Concern 2
We thank the reviewer for the suggestion. RRWP, Resistance Distance encodings, and LapPE can all provide a pairwise structural representation, especially distance information. Even though they are used for graph classification tasks, we believe there can be a connection between these encodings and link prediction tasks; this is an interesting direction for future work. We will add a discussion of the connection between RRWP/Resistance Distance/LapPE and MPLP in the revised version, detailing how these positional encodings can also be decoded into pairwise structural representations, similar to MPLP, and potentially be used for link prediction tasks. For LapPE, we will also note that it is derived from the orthogonal eigenvectors of the Laplacian matrix, while MPLP's quasi-orthogonal vectors are randomly initialized.
## Concern 3
We apologize for any confusion regarding the inductive ability of positional encodings. Previously, by positional encodings we referred specifically to those formally defined in [5]. However, positional encodings like RRWP/Resistance Distance/LapPE contain both positional information and structural information (this is also discussed in Sec 2 of [3]). We will clarify this when discussing the connection between RRWP/Resistance Distance/LapPE and MPLP in the revised version.
[5] On the Equivalence between Positional Node Embeddings and Structural Graph Representations.
## Concern 4
We apologize for the confusion regarding the orthogonality of the vectors. Consider two vectors $v_1$ and $v_2$ **independently** sampled from a standard normal distribution. The inner product $v_1^Tv_2$ is likely to be close to zero, but **not exactly** zero. However, the expected value of the inner product is $E[v_1^Tv_2] = E[v_1]^TE[v_2] = 0$, which means that the vectors are orthogonal in expectation due to their independence. This is why we refer to the vectors as quasi-orthogonal: they are orthogonal from a probabilistic perspective. We will clarify this in the revised version.
-----
We hope that we have addressed your concerns. If we have left any notable points of concern unaddressed, please let us know, and we will be happy to respond.
---
Rebuttal Comment 3.1:
Comment: Thanks for your response. The response has addressed Concern 1. Unfortunately, my concerns about Concerns 2,3,4 remain unaddressed. The suggestions and questions are as follows.
1. The authors may want to discuss the theoretical properties of the above-mentioned position encodings. For example, can the position encodings estimate common neighbors, which is the key topic of this paper?
2. Let $h\_i=E[v\_i]$. As the rank of the matrix $(h\_1,h\_2,\dots,h\_N) \in \mathbb{R}^{N \times d}$ is less than $d$, the $N$ vectors $\\{h\_i\\}_{1}^N$ in the $d$-dimensional vector space can not be pairwise orthogonal.
---
Reply to Comment 3.1.1:
Title: Round 3
Comment: We thank the reviewer for the feedback. We address the following points in your feedback:
## Q1: Can the position encodings estimate common neighbors?
We thank the reviewer for the insightful question. Theoretically, the positional encodings like RRWP/Resistance Distance/LapPE can estimate the number of common neighbors between two nodes. We first show that RRWP can estimate the Resource Allocation (RA) score between two nodes with a bias term.
> Consider a graph containing two nodes $i$ and $j$, with adjacency matrix $A$. The RA score between node $i$ and node $j$ is defined as $RA(i,j) = (AD^{-1}A)_{ij}$, where $D$ is the degree matrix. For RRWP in [3], the relative positional encoding between node $i$ and node $j$ is $P\_{i,j} \in \mathbb{R}^{K}$. Then, the 3rd element of $P\_{i,j}$ is $P\_{i,j}^{(3)} = (D^{-1}A)^2\_{ij}=(D^{-1}AD^{-1}A)\_{ij} = RA(i,j)/d\_i$, where $d\_i$ is the degree of node $i$. Therefore, RRWP can estimate the RA score between node $i$ and node $j$, biased by the node degree.
Compared to MPLP(+)'s unbiased estimation, RRWP's estimation of RA is biased. Moreover, computing RRWP involves taking powers of the adjacency matrix $A$, which is much more computationally expensive than MPLP(+)'s pairwise structural estimation. Therefore, MPLP(+) provides a more efficient and unbiased estimation of the number of common neighbors between two nodes.
Similar to RRWP, Resistance Distance and LapPE can also estimate the number of common neighbors between two nodes. Resistance Distance Encoding can do so because of its equivalence to the commute time of the random walk ($D^{-1}A$ in RRWP) [4]. LapPE can do so because it is derived from the eigenvectors of the Laplacian, which encode relative distance information between two nodes [5]. A rigorous investigation of common-neighbor estimation via Resistance Distance and LapPE is an interesting direction for future work.
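The degree-biased relationship between RRWP's third element and the RA score can be checked on a toy graph. The following pure-Python sketch (ours, not from the paper) uses an arbitrary 4-node example:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Small illustrative graph; the adjacency matrix is symmetric.
A = [[0, 1, 1, 0],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [0, 1, 1, 0]]
deg = [sum(row) for row in A]
n = len(A)
D_inv_A = [[A[i][j] / deg[i] for j in range(n)] for i in range(n)]

# Resource Allocation score: RA(i, j) = (A D^{-1} A)_{ij}
#                                     = sum_k A[i][k] * A[k][j] / deg(k).
RA = [[sum(A[i][k] * A[k][j] / deg[k] for k in range(n)) for j in range(n)]
      for i in range(n)]
# Third RRWP element: ((D^{-1} A)^2)_{ij}.
P3 = matmul(D_inv_A, D_inv_A)

# The RRWP entry recovers RA up to a 1/deg(i) factor -- the degree bias.
for i in range(n):
    for j in range(n):
        assert abs(P3[i][j] - RA[i][j] / deg[i]) < 1e-12
```

The final loop verifies, entry by entry, that $((D^{-1}A)^2)_{ij}$ equals $RA(i,j)$ divided by the degree of node $i$ on this graph.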
[3] Graph Inductive Biases in Transformers without Message Passing.
[4] Rethinking the Expressive Power of GNNs via Graph Biconnectivity.
[7] Rethinking graph transformers with spectral attention.
## Q2: Pairwise Orthogonality
We apologize for the confusion. To be precise, since the quasi-orthogonal vectors are independently sampled from a standard normal distribution, by "orthogonal in expectation" we mean that their **expected inner product** is zero, $E[v_1^Tv_2] = E[v_1]^TE[v_2] = 0$, where the independence of the vectors allows the expectation to split into the product of the expectations. We do not claim that the expectation vectors $E[v_i]$ (which are all the zero vector) are pairwise orthogonal. We will clarify this in the revised version.
------
Thanks for the feedback. If there are any other concerns, please let us know, and we will be happy to respond. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We appreciate your valuable feedback and constructive suggestions when reviewing our paper. Here, we want to address a concern raised by **Reviewer CYaw** regarding the difference between MPLP(+) and ELPH/BUDDY[1]:
## W1: Comparison with ELPH/BUDDY
We appreciate the reviewer's question about the difference between MPLP and BUDDY. We agree that the principle of capturing structural information is similar between MPLP and BUDDY. However, there are several key differences between MPLP and BUDDY:
1. **MPLP is more expressive than ELPH/BUDDY due to the flexibility of orthogonal vectors**: Sec 4.1 introduces that MPLP(+) can perform a weighted count with the norm rescaling technique. In general, by scaling the norm of the vectors according to the node degrees, MPLP can capture more nuanced structural information than BUDDY through a weighted counting mechanism. For example, Resource Allocation (RA) and Adamic-Adar (AA) are two weighted counting variants of Common Neighbor (CN), which show better overall performance than the unweighted CN count. MPLP generalizes these two methods beyond the 1-hop neighborhood by using degree-normalized orthogonal vectors. However, BUDDY cannot perform a weighted counting due to the mechanism of MinHash and HyperLogLog. Therefore, we believe that the incorporation of RA/AA into MPLP is one of the driving factors boosting its performance. In fact, in the GitHub repository of BUDDY, BUDDY uses a pre-computed RA score as the edge feature to improve the performance on OGBL-PPA, which also suggests the effectiveness of RA but, in our humble opinion, violates BUDDY's elegance of only using node-level features/signatures to perform link prediction tasks.
2. **ELPH/BUDDY can cause a distribution shift problem, while MPLP will not**: Sec 4.2 introduces that, during training, MPLP(+) performs a shortcut removal for the positive node pairs to avoid the distribution shift problem [2]. This technique is critical for both ELPH/BUDDY and MPLP. Consider a target node pair $(u,v)$ and another node $w$ that connects to node $u$ but not to $v$. If $(u,v)$ is a positive node pair, there is an edge connecting $u$ and $v$. When counting the 2-hop neighbors of node $v$, $w$ will always be counted because it can exploit node $u$ and the link between $u$ and $v$ to reach node $v$. This causes a distribution shift problem because the model can easily distinguish the positive node pairs from the negative node pairs, but only during training. MPLP(+) performs a shortcut removal for the positive node pairs to avoid this problem, whereas ELPH/BUDDY does not, which causes the distribution shift.
- In fact, we believe that the authors of BUDDY are aware of this problem but have not addressed it in their paper. In their code, they have a `use_zero_one` flag to always discard the 2-hop neighbor counts due to the issue above. Recently, the GitHub repo of BUDDY has been updated with a new feature branch `rewiring`, where they also begin to implement a shortcut removal technique to avoid the distribution shift problem. It can be found in the `remove_bridges` function of the `src/datasets/elph.py` file.
3. **MPLP+ is walk-based**: MPLP+, unlike ELPH/BUDDY, estimates the number of walks between two nodes (Theorem 3.3) rather than the number of nodes. It is still an open question whether walk-based methods are better than node-based methods for link prediction tasks (MPLP is node-based). However, MPLP+ shows better empirical performance than ELPH/BUDDY in our experiments.
4. **Lower estimation variance**: With the same computational budget, MPLP can perform the pairwise structural estimation with lower variance than ELPH/BUDDY. This is empirically shown in Appendix F.1.
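To make the weighted-counting contrast in point 1 concrete, here is a minimal sketch (ours, not from the paper) of the three 1-hop heuristics on a toy graph, where RA/AA down-weight high-degree common neighbors while CN counts them all equally:

```python
import math

def heuristics(adj, u, v):
    """Unweighted and degree-weighted common-neighbor scores for pair (u, v).
    adj maps each node to its set of neighbors."""
    common = adj[u] & adj[v]
    cn = len(common)                                        # Common Neighbor
    aa = sum(1.0 / math.log(len(adj[w])) for w in common)   # Adamic-Adar
    ra = sum(1.0 / len(adj[w]) for w in common)             # Resource Allocation
    return cn, aa, ra

# Toy graph: the pair (0, 1) has two common neighbors of different degrees.
adj = {
    0: {2, 3},
    1: {2, 3},
    2: {0, 1},          # degree 2
    3: {0, 1, 4, 5},    # degree 4
    4: {3},
    5: {3},
}
cn, aa, ra = heuristics(adj, 0, 1)
assert cn == 2
# RA down-weights the high-degree common neighbor: 1/2 + 1/4.
assert abs(ra - 0.75) < 1e-12
```

MPLP's norm rescaling plays the same role as the $1/\deg(w)$ (or $1/\log \deg(w)$) factors here, but applied through the vector norms so the weighting extends beyond the 1-hop neighborhood.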
We again thank the reviewer for the in-depth question on the difference between MPLP and BUDDY. While we have discussed the differences above in Appendix A, we did not go into much detail in the main paper to avoid distracting the reader. We will consider adding the above detailed comparison between MPLP and BUDDY in the revised version.
[1] Graph Neural Networks for Link Prediction with Subgraph Sketching
[2] FakeEdge: Alleviate Dataset Shift in Link Prediction | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Semi-Random Matrix Completion via Flow-Based Adaptive Reweighting | Accept (poster) | Summary: This paper presents a new algorithm for matrix completion in the semi-random setting, where each entry is revealed independently with probability at least $p = \frac{\mathrm{poly}(r, \mu, \log d)}{d}$. The paper provides a promising approach for designing fast matrix completion algorithms that are robust to semi-random sampling, which is argued to be a more realistic model than uniform sampling in practice. The insights may enable future work on more practical algorithms for semi-random matrix completion.
Strengths: 1. The paper provides the first high-accuracy and noise-tolerant nearly-linear time algorithm for matrix completion in the semi-random setting, improving upon previous work in terms of accuracy, condition number dependence, and noise handling.
2. The iterative method based on local reweighting and adaptive subproblems is a novel approach that enables the improved guarantees. Using flow algorithms for the reweighting subproblems is a key technical innovation.
3. The polylogarithmic dependence on the target accuracy $\epsilon$ and lack of dependence on condition number are significant improvements over previous nearly-linear time algorithms.
Weaknesses: 1. While the paper provides a promising theoretical foundation, the polynomial factors in the sample complexity and runtime may still be too high for immediate practical deployment, as acknowledged by the authors. Further work may be needed to obtain more practical parameter dependencies.
2. This paper does not sufficiently discuss how it fits into the broader field of matrix completion algorithms with practical implementation. For example, [1] considers the matrix completion problem via robust alternating minimization, which can tolerate errors caused by approximate updates. Both papers consider the practical implementation of the matrix completion algorithm. What are some similarities and differences between these two (or some other related) matrix completion papers?
3. The paper focuses on the theoretical algorithm and analysis but does not include empirical evaluation of real-world datasets. Experimental validation of the robustness and efficiency benefits would strengthen the claims.
[1] Yuzhou Gu, Zhao Song, Junze Yin, and Lichen Zhang. “Low rank matrix completion via robust alternating minimization in nearly linear time.” ICLR 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. One of the main motivations of this paper is to consider the matrix completion problem under the semi-random setting, which is a more realistic setting than uniformly random. Meanwhile, the paper mentions, “both our dependence and the dependence of [16] are somewhat large at the moment, limiting immediate practical deployment”. I understand the difficulties of making theoretical breakthroughs, but since this paper is motivated by the practical setting, I want to ask, how much work is still needed before the practical deployment.
2. A minor comment: Each definition, theorem, and proposition should be more self-contained. There are some notations in these blocks that should be more explicitly defined.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As mentioned by the authors, the main limitation is “the modeling assumptions and runtimes of the algorithms may not be practical yet”. Other than that, I do not find obvious limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions about practicality; we addressed these in our meta-comment. We hope our response clarified the motivations and focus of our work.
In terms of how much additional work would be necessary for our work to become practical, it is hard to make such a prediction, but we discuss one potential avenue here. The current $\text{poly}(r)$ is quite large; we estimate it is $\geq 30$. However, this is primarily bottlenecked by the application of our iterative method in Appendix C, as carried out in Line 903. This step natively has a dependence of $r^7$, but incurs a further dependence on $\text{poly}(1/\gamma)$; right now, our techniques require taking $\gamma = 1/\text{poly}(r)$, where $\gamma$ is the fraction of dropped rows in each step. If we could modify our framework to allow for $\gamma = 1/r^{o(1)}$, then we expect the overall $\text{poly}(r)$ to just be the aforementioned $r^7$ (which could also potentially be improved). Prior matrix completion work in the fully random model by [Kelner, Li, Liu, Sidford, Tian ‘23] was indeed able to achieve $\gamma = 1/r^{o(1)}$, so this is a natural direction for future research.
The above would already reduce the $r$ dependence significantly and moreover, it is possible that in practice, we would not need to pay as many extra factors of $r$ as required by the theoretical analysis. For instance, some of the extra factors may not end up being necessary for most instances in practice. We hope that the general framework we propose, of designing reweighting steps with a certificate that guarantees progress, can be a useful paradigm for designing more robust algorithms, and that we can replace the most computationally expensive step in our algorithm, the flow-based solver, with a faster heuristic.
We will add a more thorough discussion of existing literature on matrix completion. However, we emphasize that the paper [1] referenced (and all other iterative matrix completion algorithms in the literature aside from [Cheng, Ge ‘18] which we discuss in depth) only work in the case of fully random matrix completion – in particular they heavily rely on the assumption that the entries are all observed with the same probability. Our algorithm crucially works in a more general semi-random model where the observations are not necessarily i.i.d.
Neither our paper nor [1] has implementations yet (in particular, [1] has no experimental evaluation). In terms of theoretical guarantees, compared to [1], we give improvements in a few notable aspects even in the fully random case. While [1] discusses how their alternating minimization steps are robust, they do not seem to give any guarantees when the observations are noisy – their main theorem is only stated when the observations are exact. Also, their sample complexity and runtime depend polynomially on the condition number of the matrix, whereas our dependence is logarithmic. On the other hand, [1] has better dependence on the rank $r$ and incoherence parameter $\mu$ (both papers are polynomial in these parameters).
We appreciate the feedback about self-containedness of statements, and will take a thorough pass in the revision in an effort to make theorems, etc. more self-contained.
We hope this discussion elevates our paper in your view; thanks for all your detailed feedback.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed response. I have updated my score. | Summary: This paper proposes an algorithm that is able to achieve high-accuracy in semi-random matrix completion with nearly linear time. The key innovation lies in using a flow-based adaptive reweighting scheme to mitigate the bias introduced by the adversary. This technique effectively identifies and downweights the potentially misleading entries, allowing for accurate recovery of the underlying low-rank matrix. The authors provide theoretical guarantees for their algorithm, demonstrating its ability to achieve near-optimal sample complexity.
Strengths: 1. The paper introduces a unique flow-based reweighting strategy specifically designed to handle the challenges posed by the semi-random adversary. This approach effectively leverages the underlying structure of the problem to identify and mitigate the adversary's influence.
2. The theoretical analysis demonstrates that the proposed algorithm achieves near-optimal sample complexity, meaning it requires a near-minimal number of observed entries to accurately recover the underlying matrix.
3. The flow-based reweighting scheme is flexible and can be adapted to different variations of the semi-random model with considerations of the noise. This adaptability makes the approach potentially applicable to a wider range of practical scenarios.
4. The paper provides a clear presentation of the problem, the proposed algorithm, and its theoretical analysis. The technical details are presented rigorously, making the contributions easy to understand and evaluate.
Weaknesses: 1. While it outlines the general approach, it lacks concrete pseudocode or even finished code, making it difficult for readers to fully grasp the implementation details.
2. Unlike the work of Kelner et al 2022, the paper only considers a specific type of semi-random adversary. Exploring how the method performs against different adversarial models, including more powerful adversaries that can manipulate the values of the observed entries, would strengthen the contributions.
3. The paper's claim of near-linear time complexity is not sufficiently substantiated or demonstrated in an empirical setting. The authors do not provide a detailed analysis of the polynomial factors involved in the runtime. This makes it difficult to compare its efficiency to existing methods of similar asymptotic complexity. Furthermore, the absence of empirical validation leaves it unclear how much practical improvement the algorithm actually achieves in real-world settings.
Technical Quality: 4
Clarity: 3
Questions for Authors: Could you provide more details on the practical implementation of your algorithm? Specifically, how does it handle very large datasets in terms of memory usage and computational overhead?
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions about practicality; we addressed these in our meta-comment. We hope our response clarified the motivations and focus of our work. We agree that to further develop this line of research, it is an important open direction to achieve simpler and more practical instantiations of our framework, which by itself likely requires new ideas.
Re: your question about our adversarial model, our model is the natural extension of the semi-random sparse recovery adversary [Kelner, Li, Liu, Sidford, Tian ‘23] to the matrix completion setting. The only additional generality the [KLLST23] model affords is the ability to mix observations to achieve RIP. This additional generality appears to have little meaning in the matrix completion case, as (1) there is no global “for all directions” type statement such as RIP to assume, and (2) the structure of observations being single entries is inherent to matrix completion, which makes mixing observations lose meaning. Re: manipulating the values of observed entries, note that we do allow for noise in our model (and give recovery guarantees parameterized by the noise size), which is exactly manipulating observed entries.
We would like to note that we believe our paper describes all the implementation details for its various algorithms, as well as full proofs of correctness. We apologize if these details were not sufficiently clear; we will take additional clarification efforts in a revision, and if you have specific parts you would like clarified, please let us know.
We hope this discussion elevates our paper in your view; thank you for your detailed feedback.
---
Rebuttal Comment 1.1:
Comment: Your response clearly addressed my concerns. Thank you so much. I believe the paper has included sufficient details for the algorithm to be implemented. | Summary: This paper considers the semi-random matrix completion problem. Given an unknown rank-r matrix M, each entry of the matrix Is observed independently with a probability p_{ij} at least p. The goal is to find a matrix close the matrix M.
The SDP-based algorithm can achieve nearly-optimal sample complexity, but it is slow compared to iterative methods. Previous work gives an iterative method whose sample complexity and runtime have a polynomial dependence on $\epsilon$.
This paper provides a fast iterative method that achieves sample complexity and runtime with polylogarithmic dependence on the accuracy $\epsilon$. It has no dependence on the condition number and can also handle noise in the observations.
Strengths: 1. This paper provides a clear motivation for considering the semi-random matrix completion problem and clearly explains the limitations of previous methods.
2. This paper provides an interesting fast iterative method for this problem and shows that it achieves improved sample complexity and runtime.
3. This paper utilizes the techniques of Kelner, Li, Liu, Sidford, and Tian and makes interesting modifications based on new insights into this semi-random problem.
Weaknesses: -
Technical Quality: 3
Clarity: 3
Questions for Authors: -
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your reviewing efforts. We appreciate that you found our insights interesting and our exposition clear. | Summary: This paper consider semi-random matrix completion via flow-based adaptive reweighting.
The main result is the first high-accuracy nearly-linear time algorithm for solving semi-random matrix completion, and an extension to the noisy observation setting.
I am not familiar with the area of matrix completion.
Strengths: No
Weaknesses: NO
Technical Quality: 3
Clarity: 3
Questions for Authors: NO
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NO
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | null | Rebuttal 1:
Rebuttal: We thank all of the reviewers for their reviewing efforts and feedback.
Several reviewers, e.g., cp6D and RnDV, asked us to address our method’s practicality. We acknowledge that our work’s primary contribution is theoretical. However, we note that previous practical matrix completion algorithms (e.g., alternating minimization and other gradient methods) both (1) encountered conceptual barriers towards generalizing to the semi-random model (due to their reliance on independent observation probabilities), and (2) often failed to work reliably on practical instances, as referenced in Lines 42-45. We believe the fact that our new, near-linear time, approach provably succeeds in this challenging model is an exciting and promising proof-of-concept that could facilitate future empirical research. We emphasize that no previous algorithm achieved a comparable result to Theorem 1.
We acknowledge that we have not implemented our algorithm yet, and that there are several natural avenues for significantly improving its complexity, which could make it more practical. We view our work as an important first step towards developing practical algorithms for matrix completion that are robust to these types of real-world noise, and we believe that our work could enable future research in this important direction. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this paper, the authors study semirandom matrix completion. In their models, entry $ij$ of a $\mu$-incoherent, symmetric ground truth rank-$r$ matrix $M^{\star} \in \mathbb{R}^{d \times d}$ are revealed independently with probability $p_{ij}$, where $1 \ge p_{ij} \ge p$ for some parameter $p$ that is a function of $r$ and $d$. The assumption that $M^{\star}$ is symmetric is not important, as we can reduce the asymmetric setting to the symmetric one with minimal overhead.
An important feature of this model is that the entry reveal probabilities are allowed to vary. This contrasts against standard assumptions in the matrix completion literature which typically stipulate that the entries are revealed i.i.d with probability $p$. Although revealing more entries of the matrix may seem to help the learning algorithm learn the underlying matrix, many algorithms fail under this helpful adversary. This is evidence that these algorithms have overfit to a statistical assumption implicit in the problem statement.
A prior work [CG18] studied this problem and gave some recovery guarantees based on a global reweighting scheme for the matrix, but this algorithm has certain undesirable properties -- it is not a high-accuracy algorithm, it is not robust to outliers, and it has a dependence on the conditioning of the matrix. The present work addresses all of these issues and gives a nearly linear time algorithm that achieves an $\ell_{\infty}$ recovery guarantee.
The main technical contribution builds off of a prior work concerning semirandom sparse recovery [KLLST23]. In particular, the algorithm tries to find a progress direction in each step that can be written as the sum of a "short" matrix (one that has small Frobenius norm) and a "flat" matrix (one that has low spectral norm). This is chosen by a combination of dropping heavy rows/columns of $M$ that witness a large Frobenius error in the current iteration along with a fast reweighting algorithm that finds a candidate descent direction admitting the short-flat decomposition. The fast reweighting algorithm itself is a Frank-Wolfe type algorithm over a set of valid reweightings and short matrices. The runtime arising from this subproblem follows from observing that the set of valid reweightings is in fact a rescaled bipartite matching polytope, which means fast matching algorithms can be called as a black box, and from implementing a separate algorithm to optimize linear functions over the set of short matrices.
* [CG18] Non-Convex Matrix Completion Against a Semi-Random Adversary (https://arxiv.org/abs/1803.10846)
* [KLLST23] Semi-Random Sparse Recovery in Nearly-Linear Time (https://arxiv.org/abs/2203.04002)
Strengths: This paper constitutes an important step towards understanding algorithms that don't implicitly overfit to statistical assumptions present in the model. In practice, one probably shouldn't assume that entries are revealed i.i.d with distribution Bernoulli(p). The model studied in this paper (Definition 2) is much more realistic than the i.i.d random entry model.
Furthermore, the improvements in this work over the previous state of the art, [CG18], are quite strong. In particular, the authors get a condition-free, high-accuracy recovery guarantee in $\ell_{\infty}$. Furthermore, the algorithm runs in nearly linear time in the number of revealed entries.
The paper is very well written and the main ideas are clearly presented. The algorithmic ideas will be very interesting to anyone in the algorithmic statistics and convex optimization communities.
Weaknesses: The rank of the matrix that's output can be $\mathrm{poly}(r)$ times larger than that of the ground truth. Since this paper seems mostly concerned with $r \ll d$ and independent of $d$, this is morally not a big deal. Also, what is the polynomial? I did some very cursory searching but didn't find it explicitly written down (all I see is that it's bigger than $r^6$).
The error depends on the largest entry of the noise -- this might be a bit pessimistic (e.g. if the noise matrix has just one nonzero entry, but the corresponding index isn't revealed to the learner, then this shouldn't affect anything). The authors explicitly mention getting a more "optimistic" dependence on the noise as an interesting open direction for future work.
Technical Quality: 4
Clarity: 4
Questions for Authors: What are the challenges, if any, to extending the results to an adaptive adversary? For example, consider an adversary that first samples several entries of $M$ with probability $p$, then, upon seeing the sampled entries, reveals an extra arbitrary set of their choice? I ask this because it seems suggested that the algorithm you presented actually does work in such a setting (lines 54-62), but I don't see anything in Definition 2 to this effect.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging review and kind comments. We appreciate that you found our paper well-written and its ideas of general interest.
The rank of the output as currently written can be quite large; we estimate the exponent in the $\text{poly}(r)$ as $\geq 30$. The losses stem from the fact that the iterate that is fed into the postprocessing step already has rank $r' = \text{poly}(r)$, and then there are additional $\text{poly}(r')$ losses in translating between $\ell_2$ and $\ell_{\infty}$ norms in the postprocessing. We remark that if our final error guarantee were in Frobenius norm instead of max-entry error, then we could simply do a top-$r$ PCA at the end and reduce our output to rank exactly $r$. We chose to write the error guarantee in terms of max-entry error so that it is compatible with the assumptions on the observation noise (note that doing a top-$r$ PCA does not necessarily preserve the max-entry error). We think the exploration of using Frobenius norm-based potentials within our framework is an exciting direction for follow-up work.
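The top-$r$ PCA step mentioned above can be made concrete with a minimal numerical sketch (our illustration on synthetic data, not the paper's algorithm): SVD truncation gives the Frobenius-optimal rank-$r$ approximation (Eckart-Young), but says nothing directly about max-entry error.

```python
import numpy as np

def top_r_pca(M, r):
    # Truncated SVD: the Frobenius-optimal rank-r approximation of M
    # (Eckart-Young theorem).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(0)
# Hypothetical data: a rank-2 ground truth plus small dense noise.
B = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
M = B + 0.01 * rng.standard_normal((50, 50))

M2 = top_r_pca(M, 2)
assert np.linalg.matrix_rank(M2) == 2
# Eckart-Young: M2 is at least as close to M in Frobenius norm as the
# rank-2 matrix B is; the max-entry error, however, is not controlled
# by this step.
assert np.linalg.norm(M - M2, "fro") <= np.linalg.norm(M - B, "fro")
```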
We agree with and appreciate the comments about fine-grained guarantees (e.g., $\ell_2$ vs. $\ell_{\infty}$, and dependencies on the number of revealed entries) being an important future research direction.
The main reason the current error depends on the largest entry of the noise is that the semi-random adversary could choose to reveal all of the entries where the noise is large, which effectively scales up the noise rate. We believe exploring the use of $\ell_2$-based certificates rather than $\ell_{\infty}$ could significantly improve our $r$ dependence (which currently uses several norm conversions, e.g., to make the Frobenius bound in Lemma 7 compatible with the entrywise guarantees of Lemma 13), and is important to better understand in the semi-random model.
The main difficulty with extending to an adaptive adversary is that it is no longer possible to split the samples into independent batches, e.g., we cannot assume that we have fresh randomness in each step of the iterative method. This makes the analysis significantly more challenging because the observations can depend on our current iterate. We mention that variants of sample splitting analyses caused major difficulties in previous works even in the fully random case, as discussed in Section 3 of [Recht, 2011], so there is precedent for this challenge. We will revise our phrasing in Lines 54-62 accordingly to be more clear; thank you for the comment.
---
Rebuttal Comment 1.1:
Title: response to author response to review
Comment: Thanks so much for the detailed response :) | null | null | null | null | null | null |
Polyhedral Complex Derivation from Piecewise Trilinear Networks | Accept (poster) | Summary: The paper proposes a theoretical link between the tropical algebraic interpretation of ReLU-based neural networks and surfaces represented by signed distance functions trained on those networks. Based on that theoretical interpretation, a surface extraction algorithm is proposed.
Strengths: (1) The mathematical notation is good overall, with good choices for the symbols and their use.
(2) The theoretical framework is interesting and it is a good application of tropical algebra.
Weaknesses: (1) I do not like the current presentation of the paper. I understand that it is theoretical, but its focus on lower abstraction details from the very beginning makes it a niche paper in NeurIPS. For example, the Experiments section focuses on the details of the empirical settings, moving the actual results to an Appendix. Another example is that there is little effort in providing an overview of the method. Overall, this is the core design of the paper: it focuses on the definitions and details before properly motivating them. In its current form, this paper is more aligned with an applied mathematics journal. This is the core reason for my current rating, and fixing that problem is not possible without rewriting the paper. However, I think the theoretical framework is sound, so I am not recommending rejection. All other following weaknesses derive from this presentation problem.
(2) This paper needs graphical intuitions to assist inexperienced readers. For example, showing the plot of a simple tropical polynomial, with the hypersurface points highlighted, would give a good intuition of the rationale behind Definition 3.1 (i.e. the linking points between each piece-wise linear region must be where two monomials are equal). If space for the images is a problem, I would advise including them in the supplementary material, with references in the main text. This more didactic approach would broaden the audience that could appreciate the paper.
(3) Another example is the subdivision scheme. Showing an schematic image of the subdivision would help understanding.
(4) I also think a better overview image is needed. Figure 1 does not make a good overview because its abstraction level is too high. A better image should refer to core symbols in the definitions. Not having a proper overview makes understanding the motivations behind some definitions more difficult. For example, $\psi$ and $H$ are defined in Definition 4.1, and I could not find the motive for them in the main text. They are only used at Proposition D.2, at the Appendix. This kind of graphical intuition is even more important in a geometrical problem, which is the case of this paper.
(5) The extracted surfaces have noise and discretization problems, probably because of the grid-based approach and interpolation. Grid-based methods are known to introduce discretization problems (Neural Geometric Level of Detail - NGLOD, Instant Neural Graphics Primitives - Instant-NGP).
(6) The evaluation is lacking. The paper only compares against classic marching cubes. Being a classic algorithm, several improvements were introduced by more recent papers over the years.
Technical Quality: 2
Clarity: 1
Questions for Authors: (1) Typo in line 426: $x^3 + 2xy + y^4$ should be $x^3 \oplus 2xy \oplus y^4$. This is important because an inexperienced reader may get stuck there.
(2) I believe the notation $v_j^{(i)}(x)$ in Proposition 3.3 is a little bit ambiguous. It is not immediately obvious that the $j$-th component is from the result of the layer computation $v^{(i)}(x)$ or if it is the $j$-th weight in layer $(i)$. In the second case $v_j^{(i)}(x)$ would have a weird interpretation, but I can see readers being stuck in there. Changing the notation probably would have consequences, but at least saying that the component $j$ is from the resulting vector $v^{(i)}(x)$ in the commentary that follows the definition would avoid that ambiguity.
(3) Also in Proposition 3.3 I would add an intuition commentary afterwards saying that $v_j^{(i)}(x)$ is a tropical polynomial, thus $v_j^{(i)}(x) = 0$ is an hyperplane composed of the points where two monomials are equal, intuitively connecting Proposition 3.3 with Definition 3.1.
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Yes, section 7 describes limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and constructive feedback. We appreciate your recognition of the theoretical soundness of our framework. We would like to address your concerns regarding the presentation of our paper.
**W1. I do not like the current presentation of the paper.**
- We understand your concern about the paper's focus on lower abstraction details from the beginning. Our intent was to provide a solid theoretical foundation before delving into the empirical results, as we believe this balance is crucial for a comprehensive understanding. We are not simply "moving the actual results to an Appendix." The main results are demonstrated in Table 1 and Figure 2, showcasing the improved Chamfer efficiency across multiple benchmarks for six objects and both small and large models. Tables 2 to 8 in the Appendix provide detailed numbers and are organized separately for readers who may wish to examine the specifics.
**W2. This paper needs graphical intuitions to assist unexperienced readers.**
- Our new **Figure B in the rebuttal PDF** could enhance clarity as you requested. It visualizes the regions of a tropical polynomial for inexperienced readers.
**W3. Another example is the subdivision scheme.**
- Although we have illustrated our curved edge subdivision scheme in Figure 1 (2a-c), **Figure C in the rebuttal PDF** shows a schematic of the subdivision progress, as you suggested. We hope this feedback may resolve your concerns.
**W4. I also think a better overview image is needed... For example, $\psi$ and $\mathrm{H}$ are defined in Definition 4.1, and I could not find the motive for them in the main text.**
- In Definition 4.1, $\psi$ represents trilinear interpolation, and $\mathrm{H}$ refers to the target features to be interpolated. In Definition 4.3, we describe $\tau$ as "using the trilinear interpolation of spatially nearby learnable vectors on a three-dimensional grid." HashGrid (Muller et al., 2022) assigns features from hash table entries to $\mathrm{H}$ through hashing, with weights based on their relative positions in the grid. Thus, when $\tau$ is instantiated with HashGrid, $\psi$ in Equation 5 corresponds to $\tau$, where $\mathrm{w}$ represents a position in a normalized unit grid and $\mathrm{H}$ consists of the hashed features.
- Although HashGrid, which won the best paper award at SIGGRAPH'22, is well-known, we may have wrongly assumed readers would be familiar with its formal definition (our oversight).
- Based on your suggestion, we will consider enhancing our manuscript with textual and graphical explanations. Meanwhile, we recommend you refer to Figure 3 in Muller et al. (2022) for a visual explanation of their use of trilinear interpolation.
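To make the roles of $\psi$ and $\mathrm{H}$ concrete, here is a minimal sketch of trilinear interpolation of eight corner features by a relative position $\mathrm{w}$ inside a grid cell (our illustration under assumed conventions; the function names, corner ordering, and toy features are ours, not the paper's or HashGrid's):

```python
import numpy as np

def trilinear(w, H):
    # psi(w, H): w in [0,1]^3 is the relative position inside a grid
    # cell; H is an (8, F) array of features at the cell's corners,
    # with corner index interpreted bitwise as (z y x).
    x, y, z = w
    out = np.zeros(H.shape[1])
    for corner in range(8):
        cx, cy, cz = corner & 1, (corner >> 1) & 1, (corner >> 2) & 1
        weight = ((x if cx else 1 - x) *
                  (y if cy else 1 - y) *
                  (z if cz else 1 - z))
        out += weight * H[corner]
    return out

H = np.arange(8.0).reshape(8, 1)  # toy corner features, F = 1
assert np.allclose(trilinear((0, 0, 0), H), H[0])    # exact at a corner
assert np.allclose(trilinear((1, 1, 1), H), H[7])
assert np.allclose(trilinear((0.5, 0.5, 0.5), H), H.mean(0))  # cell center
```

In a HashGrid-style encoding, the rows of `H` would come from hash table entries looked up per grid vertex, and this interpolation would be repeated per resolution level.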
**W5. Discretization problems, probably because of the grid-based approach and interpolation.**
- For mesh extraction, we would like to note that sampling-based methods, e.g., MC, severely suffer from discretization, especially in low-resolution regimes. For implicit surface learning, while sinusoidal positional embedding is also known to attenuate the spectral bias, it suffers from slow training/rendering times, necessitating heavy density/SDF networks compared to HashGrid. For our method, one possible solution is carefully adjusting the multi-resolution levels and resolution growth factors to spread the multi-resolution grids more evenly.
**W6. The evaluation is lacking. The paper only compares against classic marching cubes. Being a classic algorithm, several improvements were introduced by more recent papers over the years.**
- We are happy to supplement on this matter. We provided experiments to compare with MT, NDC, and using a mesh simplification, QEM (Quadric Error Metrics; Garland & Heckbert, 1997). For details, please refer to our global responses **G.2** and **G.3** and the corresponding new **Figures A, D, E, and F in the rebuttal PDF**. We present both qualitative and quantitative results to highlight the advantages of our method.
- The "several improvements over the years" appear to be tailored to each method's specific niche, each with its own pros and cons (refer to G.2). In our benchmarks, MC, utilizing our modernized library PyMCubes, outperforms others in terms of efficiency, establishing it as the de facto method.
**Q1. Typo in line 426**
- Actually, this is not a typo, but it does admit ambiguity. We mean that the *classical* polynomial $x^3 + 2xy + y^4$ would be rewritten as a *tropical* polynomial by naively replacing $+$ with $\oplus$ and $\cdot$ with $\odot$, respectively: $x^3 + 2xy + y^4 \rightarrow x \odot x \odot x \oplus 2 \odot x \odot y \oplus y \odot y \odot y \odot y$. After this, we rewrite by the definitions of the tropical operations as $\max(3x, x + y + 2, 4y)$. This example (ours, with the max notation) comes from Wikipedia (https://en.wikipedia.org/wiki/Tropical_geometry), and we believe this is the convention for referring to *classical*/*tropical* polynomials. We will clarify this in the revision.
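The rewriting above can be sanity-checked mechanically; a minimal sketch (our illustration, not the paper's code) evaluating the tropical polynomial via $\oplus = \max$ and $\odot = +$ and comparing against the max form:

```python
def trop_add(a, b):   # tropical addition (oplus) = max
    return max(a, b)

def trop_mul(a, b):   # tropical multiplication (odot) = +
    return a + b

def trop_poly(x, y):
    # x (.) x (.) x   (+)   2 (.) x (.) y   (+)   y (.) y (.) y (.) y
    m1 = trop_mul(trop_mul(x, x), x)                # 3x
    m2 = trop_mul(trop_mul(2, x), y)                # x + y + 2
    m3 = trop_mul(trop_mul(trop_mul(y, y), y), y)   # 4y
    return trop_add(trop_add(m1, m2), m3)

# The tropical polynomial agrees with max(3x, x + y + 2, 4y) everywhere.
for x, y in [(0, 0), (1, -2), (-3, 5), (2.5, 0.5)]:
    assert trop_poly(x, y) == max(3 * x, x + y + 2, 4 * y)
```

The tropical hypersurface of this polynomial is exactly the set of points where the max is attained by at least two of the three monomials.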
**Q2. Notation in Proposition 3.3**
- Yes, the component $j$ is from the resulting vector $v^{(i)}(x)$. We stated "the subscript j denotes the j-th element (neuron index; the j-th element in the intermediate output), assuming the hidden size of neural networks is H." Since we define $j \in [1, H]$, we thought readers would follow. Your suggestion is valuable and will be incorporated into the update.
**Q3. Also in Proposition 3.3 I would add an intuition commentary afterwards saying that...**
Great point! We appreciate that.
---
Rebuttal Comment 1.1:
Comment: Thank you for considering the review and the efforts to answer the comments. I have no further questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt response.
We humbly ask that you please inform us if you have any remaining major issues that could affect the rating so we can address them during the author-reviewer discussion period. In particular, we have included Figures B and C in the rebuttal PDF to address your main concerns. | Summary: The paper extends the idea of extracting the polyhedral complex from ReLU networks to extracting a more general cell complex from ReLU networks with a positional encoding, specifically, trilinear encodings, such as HashGrid and TensoRF, commonly used in neural field representations of geometries or scenes.
It first theoretically extends the concept of piecewise linear to piecewise trilinear networks. It then proposes a modified *curved* edge subdivision algorithm to extract a polyhedral mesh approximating the neural implicit surface. The key assumption governing the approximation error is the eikonal loss, which ensures that the isosurfaces of the trilinear functions are close to planar.
The algorithm is validated on several shapes qualitatively and quantitatively, comparing to marching cubes in terms of the mesh size, accuracy, and time. It demonstrates sparser meshes at an increased computational cost.
Strengths: The attempt to generalize the insights from piecewise linear networks to ReLU networks with positional encodings is well motivated and holds a lot of promise. To the best of my knowledge this is the first attempt at such a characterization as well as a practical implementation. The theoretical aspect of the work is in my view the most clear contribution (however, I could not fully verify the details).
Weaknesses: Despite the method being able to handle the volumetric complex of the input space, the work only considers the "skeletonized" complex, i.e., the boundary mesh. At least briefly describing the former would make for a more rounded story. Instead, the skeletonized method is framed as a potential alternative to marching cubes, which unfortunately is not convincing from the results. The only demonstrated advantage is in terms of the mesh size. This argument would be more convincing given a comparison to marching cubes with some basic mesh post-processing. The work would also benefit from comparing the same procedure to piecewise linear networks and the underlying edge subdivision method in terms of training time, extraction time, mesh size, and accuracy.
Similarly, while the method is evaluated for different positional encodings, the neural network size remains fixed (3x16) making it hard to judge how the method scales in practice.
The topic and the proposed method are nuanced and technical, so it does not help that there are many imprecise statements, including in theorems, and poor phrasing, which render the manuscript hard to follow (see questions and suggestions below). The authors also overlook the chance to help the reader with more visual aids.
Overall, I think the work has great potential which is not fully realized; the paper is not publication-ready due to the unrefined presentation and a somewhat uninspiring application.
Technical Quality: 2
Clarity: 2
Questions for Authors: - In Theorem 4.5, is it true that $\tau$ is a trilinear function? While $\tau$ is defined elsewhere in 4.3, this assumption is missing in the theorem itself which is confusing. You should also clarify in the manuscript how trilinearity is used in the proof, e.g. in (41) - (43).
- With that, could you clarify Theorem 4.5? My best understanding is that it states that the only trilinear function which satisfies the eikonal equation is a plane, but a counterexample to this is easy to conceive.
- In Line 221, do you perhaps mean $ |\tilde{\nu}(\mathbf{x})| \leq \epsilon$? Otherwise, you select the edges of the sub-level set, instead of the level set.
- Why do you use the $256^3$ marching cubes mesh as a ground-truth instead of using the input mesh on whose SDF you train?
- Can you comment on the extraction of the volumetric mesh? Is my understanding correct that this is what you have before the skeletonization step? Does this and your method more broadly offer a way to explain the success of trilinear positional encodings in terms of the underlying complex?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The limitations paragraph focuses solely on the runtime of the method. The section would benefit from reiterating and discussing the reliance on the eikonal constraint, which is known to be hard to satisfy globally. I would also urge the authors to consider discussing the limitations of the experiments as described in the weaknesses.
**Errors and suggestions**
- The motivation of discussing the normals in 4.3 is unclear.
- All the left-hand terms in 40-43 should be squared. The rhs in (40) is the square of the gradient norm, not the gradient norm.
- The work would benefit from a discussion of the size of the NN and the encoding for practical applications. To my surprise, [12] uses similarly sized networks to your 3x16 with decent performance.
- Line 19: "polyhedral" -> 'polyhedron"
- Line 50: "This provides a theoretical exposition of the eikonal constraint" is unclear
- Line 59: you should give some examples of where marching cubes are used in a learning pipeline and explain what you mean by redundancy (probably flat regions which are described by many triangles?)
- Equation 2: I would not recommend overloading the notation for $d$
- Definition 3.2: what do you mean by "without loss of generality" when defining the NN?
- Line 109: "subscription" -> "subscript"
- Line 110: "for rather discussions as an SDF" is entirely unclear
- Lines 112-114 are very unclear
- Line 127: "subdivising" -> "subdividing"
- Definition 4.1: what is $\psi$? You should also define/explain $F$ before using it. Why is the definition given for general $D$ if this is a trilinear interpolation?
- Line 203: "also the" -> "also due to the" ?
- Line 248: ", similarly" is not needed
- Line 259: Can you be more precise with the objective? The goal is to get a boundary mesh of the zero-set of the SDF.
- Line 262: reformulate to use a verb
- Line 276: CE is not defined in the main text.
- Line 294: "plenary" -> "planarity"?
- Line 560: do not write "by the way"
- Overall, please check the text with a spell and/or style checker
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your efforts and the detailed comments you provided. We are particularly pleased that the theoretical aspects of the work are considered a significant contribution.
**At least briefly describing the former would make for a more rounded story.**
- In Sec. 1, we introduce mesh extraction from Continuous Piecewise Affine (CPWA) functions, which are limited by spectral bias; nonlinear positional encodings are typically used to address this limitation. Consequently, applying CPWA mesh extraction techniques becomes challenging. In Sec. 2, we discuss how many previous works face significant challenges due to the computational cost of exponentially growing linear regions. Linear regions grow polynomially with the hidden size and exponentially with the number of layers (Zhang et al., 2018; Serra et al., 2018). These issues were our key motivations.
- Further elaboration on what you mean by "handling the volumetric complex" would help us provide specific feedback on that matter.
- Also, could you refer to **our global response G.1** for "Why does the chamfer efficiency matter?"
**This argument would be more convincing given a comparison to marching cubes with some basic mesh post-processing.**
- We provided supplementary experiments to compare with MT, NDC, and using a mesh simplification, QEM (Quadric Error Metrics; Garland & Heckbert, 1997). Please refer to our global responses **G.2** and **G.3** and the corresponding new **Figures A, D, E, and F in the rebuttal PDF**. We present qualitative and quantitative results on them.
**The work would also benefit from comparing the same procedure to piecewise linear networks...**
- We are afraid there is a misunderstanding. Our motivation stems from the issue of spectral bias (Tancik et al., 2020) in implicit neural surface learning methods, which necessitates the use of positional encoding, such as HashGrid (Muller et al., 2022). Therefore, exploring mesh extraction from these types of networks is a key focus of our work. Comparing our method to piecewise linear networks (for which it is hard to learn a surface naively due to the spectral bias) could be distracting and misguided, as our goal is not to argue that HashGrid improves quality in addressing spectral bias -- this has already been well discussed and established in previous works.
**The neural network size remains fixed (3x16)?**
- Networks using HashGrid (Muller et al., 2022) typically have **a small hidden size of 64 and only one hidden layer**, even in implicit neural surface learning, NeuS2 (Wang et al., 2023). This is because HashGrid contains a large number of learnable parameters in its hash tables, specifically (feature size 2) x (# of entries 2^19 = 524K) x (multi-resolution levels 4). In our experiments, we used a reasonable network capacity. For clarification, the input size of 3 is mapped to (levels 4 x features 2 = 8) by HashGrid, and then the decoding network maps 8 -> 16 -> 16 -> 1 with 3 mapping layers (as described in Sec. 6).
**Q1. In Theorem 4.5, is it true that \tau is a trilinear function?**
- Yes. We defined $\tau$ as a trilinear function in Definition 4.3 in Sec. 4.1 (the previous section). In Definition 4.1, $\psi$ represents trilinear interpolation. In Definition 4.3, we describe $\tau$ as "using the trilinear interpolation of spatially nearby learnable vectors on a three-dimensional grid." relating to $\psi$ for instantiation. We will revise to have a formal definition of $\tau$, enhancing clarity.
**Q2. Could you clarify Theorem 4.5? My best understanding is that it states that the only trilinear function that satisfies the eikonal equation is a plane...**
- Theorem 4.5 states that a trilinear function (within a piecewise trilinear region) which satisfies the eikonal equation represents a plane. We do not argue that a plane is the **only** function which satisfies the eikonal equation. We choose the trilinear networks using HashGrid since 1) it effectively handles the spectral bias, 2) it is one of the widely-used methods in NeRFs, and 3) it has a planarity opportunity using the eikonal constraint. Notice that we provided a detailed proof in Theorem D.5.
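The direction of the statement can be illustrated numerically; a minimal sketch (our illustration with hand-derived gradients, not the paper's proof): a plane has unit gradient norm everywhere, whereas a generic trilinear function such as $f(x,y,z) = xyz$ does not, so the eikonal constraint rules out non-planar trilinear pieces.

```python
import math

def grad_norm_plane(p):
    # f(x, y, z) = x  ->  grad f = (1, 0, 0), so |grad f| = 1 everywhere:
    # the plane satisfies the eikonal equation.
    return 1.0

def grad_norm_xyz(p):
    # f(x, y, z) = x*y*z  ->  grad f = (yz, xz, xy).
    x, y, z = p
    return math.sqrt((y * z) ** 2 + (x * z) ** 2 + (x * y) ** 2)

pts = [(0.1, 0.2, 0.3), (0.9, 0.5, 0.4), (0.5, 0.5, 0.5)]
assert all(abs(grad_norm_plane(p) - 1.0) < 1e-12 for p in pts)
norms = [grad_norm_xyz(p) for p in pts]
# The gradient norm of x*y*z varies over the region, so it cannot
# satisfy |grad f| = 1 throughout: it is not eikonal.
assert max(norms) - min(norms) > 1e-3
```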
**Q3. In Line 221, do you perhaps mean $\|\tilde{v}(\mathrm{x})\| \le \epsilon$?**
- Yes. Thank you for pointing out the typo.
**Q4. Why do you use the 256^3 marching cubes mesh as a ground-truth instead of using the input mesh on whose SDF you train?**
- To eliminate Bayes error from underfitting. This work focuses solely on the extraction itself, not on surface learning. Notice that we used $512^3$ for large models and confirmed that it was a sufficiently high resolution for meaningful evaluation.
**Q5. Can you comment on the extraction of the volumetric mesh?**
- The question seems unclear to us; but we would like to explain that before the skeletonization, the initial volumes (a cube of interest) are partitioned by grid marks (grid planes) and folded hypersurfaces. In tropical geometry, these are known as tropical hypersurfaces (Itenberg et al., 2009), which are sets of points where the function is not linear (or zero). Before the skeletonization step, we identify all non-linear boundaries and select the zero-set vertices and edges for our efficient extraction. **Please refer to Fig. C in the rebuttal PDF** for a visual aid.
**Limitation**
- Thank you for pointing that out. We want to draw your attention to Appendix B.1, which discusses "Satisfying the eikonal equation and approximation." In this section, we examine the effectiveness of HashGrid in satisfying the constraint through the piecewise allocation of hash table entries and their locality. We will integrate this discussion into the Limitations section.
**Errors and suggestions**
- We sincerely appreciate your effort in providing a detailed list of errors and suggestions. We will incorporate your feedback to enhance the clarity of our work. Note that a discussion of HashGrid ( \[12\]) has been addressed above (model size).
---
Rebuttal 2:
Comment: *Below are our comments on your questions, demonstrating our commitment to clarifying our paper.*
---
**The motivation of discussing the normals in 4.3 is unclear.**
- To define a face, we use a list of vertex indices. To determine the normal vector of the face (considering the ambiguity of two opposite directions), we implicitly rely on the order of the vertex indices as described in the paper.
**Line 50: "This provides a theoretical exposition of the eikonal constraint" is unclear**
- This indicates Thm 4.5 and Coro. 4.6 which stated "that within the trilinear region, the hypersurface transforms into a plane" under the eikonal constraint.
**Line 59: you should give some examples of where marching cubes are used in a learning pipeline and explain what you mean by redundancy (probably flat regions which are described by many triangles?)**
- MeshSDF (Remelli et al., 2020) will be the example. Yes, your answer is right.
**Line 110: "for rather discussions as an SDF" is entirely unclear**
- An SDF has a single scalar output as a definition. In the manuscript, we later discuss the networks as an SDF in Sec. 4, specifically Def. 4.3. (Line 175). We will make sure to clarify this as you asked.
**Lines 112-114 are very unclear**
- We hope **Fig. C (c) in the rebuttal PDF** helps your understanding. Our method performs a single pass over neurons to find tropical hypersurface (non-linear boundaries; refer to **Fig. B in the rebuttal PDF**).
**Definition 4.1: what is $\psi$? You should also define/explain $F$ before using it.**
- In Def. 4.1, $\psi$ is a trilinear interpolation function given weight $\mathrm{w}$ and features $\mathrm{H}$. This is implemented using HashGrid as $\tau$ in Def. 4.3. In Lines 172-173, we explained that "they transform the input coordinate $\mathrm{x}$ to $\mathrm{w}$ with a multi-resolution scale and find a relative position within a grid cell." The features $\mathrm{H}$ are the hash table entries derived from the hashing function, where the input to the hash function is a given grid vertex and the output is the index in the hash table. We will provide a clear explanation in the revision.
- $F$ is the output dimension of $\tau$, positional encoding module. (Line 170)
**Line 259: Can you be more precise with the objective? The goal is to get a boundary mesh of the zero-set of the SDF.**
- By the definition of SDF, the zero-set of the SDF naturally indicates surfaces. But we respect your concern and will revise it.
---
Rebuttal 3:
Comment: Dear Reviewer FmNY,
As we approach the end of the reviewer-author discussion period, we want to ensure that our feedback has sufficiently addressed your major concerns. We recognize the importance of this opportunity to address the reviewers' raised concerns, and we are committed to providing any necessary clarifications. Please let us know if you have further questions or need additional clarification.
Best regards,
On behalf of all authors
---
Rebuttal 4:
Comment: I thank the authors for their detailed response. While some of my minor concerns are addressed, the presentation and clarity remain my main concern.
**Figure C**
I think this schematic is very misleading. The final decision boundary (0 level-set, red) is not composed of some intermediate neuron decision boundaries in the way you show. I would strongly encourage you to visualize some intermediate steps from your method accurately depicting a realistic subdivision.
If this is not possible in the available time due to how your code is, a simple way to visualize the partition is to pass the coordinates of the pixels through the NN and plot the pre-activations of the ReLU neurons (e.g. Figures 2 or 7 in Hanin, B. et al, Deep ReLU networks have surprisingly few activation patterns, NeurIPS 2019.)
**MC + QEM**
I appreciate the additional MC+QEM experiment, but I slightly question the setup - this is reported for the _Large_ model, which is mainly discussed in the Appendix (Table 6). There, your method has better CD than any MC, so of course QEM will perform better on your mesh. I would appreciate it if you could report MC+QEM for the _Small_ model (Table 1), which is used in the main paper and where finer MC meshes (128 or 196) have better CD but worse vertex counts.
**Theorem 5**
Could you please clarify Line 559 in the proof? Do you perhaps mean for _all_ $\mathbf{x}$?
**Model size**
My criticism is not that the size itself is small, but rather that the method is assessed for just a _single size_. E.g., Fig. 10 of Instant-NGP [12] reports multiple widths and depths of the MLP, so it would be reassuring to see how the quality/runtime behaves for models of sizes other than 3x16.
**Comparison to CPWA**
I understand the motivation of the spectral bias and positional encoding. However, because you are treating this topic from the perspective of surface meshes, contrasting the meshes obtained with and without an encoding is in my view a worthwhile consideration. Describing and explaining the qualitative and quantitative differences between these meshes could add to a unique interpretation of positional encodings and strengthen the unique aspect of your work. The whole point of increasing the spectral bias is to represent higher frequency components - are these present in the mesh relative to the plain ReLU NN? Training a Standford bunny with ReLU certainly isn't hard (the method you reference in [4] and the code therein already provides this). I do not expect the authors to perform the experiments and the analysis in the short available time and this is not critical for my current assessment, but as a discussion point, I do maintain that this could add to your unique work overall.
**Q4**
Thank you, this clarifies my concern on the ground-truth.
---
Rebuttal 5:
Comment: **Figure C**
We would like to clarify a potential misunderstanding. While Figure C is a schematic figure, we believe it accurately illustrates the characteristics of our method. The final decision boundary (0 level-set, shown in red) *may exclude* some intermediate neuron decision boundaries, as demonstrated. In Definition 3.1 (p.3), we introduced the concept of the **tropical** hypersurface, corresponding to the nonlinear boundaries formed by all neurons (including an SDF). We then referred to Proposition 3.3 (Decision boundary of neural networks), which stated that the decision boundary is a tropical hypersurface; however, the reverse is not true. Not all points on the tropical hypersurface constitute the decision boundary at the zero level-set. We formally denoted this relationship as $\mathcal{B} \subseteq ...$, rather than as $\mathcal{B} = ...$ in Proposition 3.3. And, in Lines 112-113, we stated that "the preactivations, $\nu_j^{(i)}$ are the *candidates* shaping the decision boundary." For more rigorous explanations on this matter, please refer to Proposition 6.1. of Zhang et al. (ICML 2018) "Tropical Geometry of Deep Neural Networks."
Notice that there is one more detail. The bottom line of the bunny comes from a grid line, not from a neuron's folded hypersurface. Geometrically, the grid lines of the HashGrid are non-linear boundaries which efficiently divide the Euclidean space into a lattice structure. To achieve this via neurons alone, we would need more neurons, which may explain why HashGrid (Muller et al., 2022) does not need many neurons or layers in its decoding networks.
Your constructive feedback has highlighted areas that may lead to misunderstandings, and we are fully committed to improving the clarity of our presentation.
Thank you for suggesting the figure from Hanin & Rolnick (2019). We will consider visualizing our concepts using this reference. Sadly, we cannot update the rebuttal PDF in the reviewer-author discussion period.
**MC + QEM**
Unfortunately, the *Small* models have coarser grid intervals in the HashGrid, which puts them at a disadvantage compared to the *Large* models in terms of our analytical approximation method. (Denser grid intervals result in smaller errors when approximating curved hypersurfaces.) Additionally, using the *Small* models is less representative of real-world scenarios than the *Large* models.
**Theorem 5**
Yes, that appears to be an unintended typo. We will correct it.
**Model Size**
Referring to `3x16` could be misleading. Our model has three layers, structured as 3 (coordinates) -> 8 (hash grid) -> 16 -> 16 -> 1.
One limitation of this ablation study is that model size impacts fitting, affecting the GT mesh (MC256 for the pseudo-GT). However, the major model capacity comes from the hash tables (explained above in the initial author feedback). Therefore, the changes in CD are small. For completeness, we provide the following tables:
For the Bunny Small models,
| Layers | Width | Vertices | CD (x 1e-6) | Time (sec) | Note |
|--------|-------|-------------|-------------|------------|------------|
| 3 | 4 | 4547 | 666 | 0.12 | |
| 3 | 8 | 4661 | 750 | 0.19 | |
| 3 | 16 | 4628 | 724 | 0.65 | Ours |
| 3 | 32 | 4567 | 754 | 0.57 | |
| 3 | 64 | 5588 | 759 | 2.86 | |
| 2 | 4 | 4565 | 738 | 0.11 | |
| 2 | 8 | 4863 | 768 | 0.13 | |
| 2 | 16 | 4566 | 734 | 0.12 | |
| 2 | 32 | 4577 | 754 | 0.22 | |
| 2 | 64 | 4611 | 739 | 0.50 | InstantNGP |
Notice that the model with layers=3 and width=64 took 2.86 seconds because of the large number of intermediate (not final) vertices. The time complexity is more sensitive to the number of intermediate vertices than to the number of layers or the width.
W.r.t. the number of layers:
| Layers | Width | Vertices | CD (x 1e-6) | Time (sec) | Note |
|--------|-------|-------------|-------------|------------|------------|
| 2 | 16 | 4566 | 734 | 0.12 | |
| 3 | 16 | 4628 | 724 | 0.65 | Ours |
| 4 | 16 | 4569 | 759 | 0.51 | |
| 5 | 16 | 4579 | 749 | 0.59 | |
The number of layers has a limited impact on runtime, especially for 3 to 5.
**Comparison to CPWA**
We appreciate your insight into how this could enhance the overall work. However, we have found that the experiments require a major code revision due to integration issues with HashGrid. Nonetheless, we will consider this in our revision.
---
Rebuttal 6:
Comment: I thank the authors for the clarification. I appreciate the additional table reporting the runtimes for different model sizes. I would encourage you to include this in the appendix.
Given that some of my concerns have been addressed and given the overall commitment of the authors to improve the work, I will increase my rating to borderline accept.
However, my main concern remains with potentially publishing misleading information. I urge the authors to take time to consider the subdivision process and redo Figure C for the final version. Currently, Figure C *is* wrong - in the generic case, the final decision boundary does not overlap with intermediate neuron decision boundaries. Each neuron in a ReLU network subdivides some existing pieces. The composition with the positional encoding changes the shape of these pieces but not the fact that these are still subdivided in a generic manner.
Please consult existing works describing this, e.g.
- Figure 4 in Humayun et al. SplineCam: Exact Visualization and Characterization of Deep Network Geometry and Decision Boundaries. CVPR 2023
- Figure 3 in Jordan et al. Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes. NeurIPS 2019.
You can also verify this yourself using the approach I described previously.
Similarly, you cannot simply state that _Small_ performs worse so you are just not going to report the results. | Summary: The paper presents a theoretical and practical framework to extract meshes from piecewise trilinear networks using a hash grid representation. They further show that hypersurfaces within the trilinear regions become planes under the eikonal constraint. A method for approximating the intersection between 3 hypersurfaces is proposed (approximated as 2 hypersurfaces and a diagonal). Evaluation is performed against marching cubes at various resolutions using chamfer distance, chamfer efficiency, and angular distance.
Strengths: - Thorough theoretical analysis
- Proof of concept practical evaluations show improvement over naive marching cubes at similar complexity of meshes
Weaknesses: - While there are some improvements over marching cubes, the improvements are not quite substantial, and come at the cost of a higher runtimes
- No comparisons to any alternate approaches over marching cubes are provided -- e.g. marching cubes + QEM decimation etc.
- No rigorous qualitative comparisons are provided, which could be important since chamfer does not always capture the shape quality
Technical Quality: 3
Clarity: 2
Questions for Authors: It would be nice if the authors could add some qualitative comparisons, and also compare against more than just marching cubes. Further, the current evaluation is only performed on a handful of meshes; it would be nice if an evaluation on a bigger dataset could be performed.
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and constructive feedback. We explain our rationale for the evaluation and additional note for the angular distance we used in quantitative evaluations.
**W1. While there are some improvements over marching cubes, the improvements are not quite substantial, and come at the cost of a higher runtimes**
- Our method is a white-box approach that identifies apex vertices under a reasonable eikonal constraint, showing improved Chamfer efficiency (please refer to **our global response G.1** on this matter) across benchmarks covering six objects and both small and large models. After submission, by utilizing PyTorch's `masked_scatter_` function to identify intersecting edges and other quantities, we significantly reduced the processing time from 53.5 seconds to approximately 3 seconds for the Dragon (Large) model, achieving runtime comparable to MC. This result highlights the potential for further improvements through efficient implementation. We have also provided a theoretical analysis of the time complexity (see Appendix B), which suggests that our approach is theoretically optimal. The code will be publicly released upon acceptance.
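The core of the edge-identification step above is vectorized sign testing: an edge intersects the zero level-set exactly when its endpoint SDF values differ in sign. The following is an illustrative sketch of that idea using plain NumPy boolean masking (the rebuttal's actual speed-up uses PyTorch's `masked_scatter_`; this is not the authors' code):

```python
import numpy as np

def intersecting_edges(edges, sdf_vals):
    # An edge is crossed by the zero level-set iff its endpoint SDF values
    # have opposite signs; boolean masking finds all such edges at once.
    s0, s1 = sdf_vals[edges[:, 0]], sdf_vals[edges[:, 1]]
    mask = np.sign(s0) != np.sign(s1)
    # Linear-interpolation parameter of the crossing along each masked edge
    t = s0[mask] / (s0[mask] - s1[mask])
    return mask, t

# Tiny example: only the first edge straddles the zero level-set.
edges = np.array([[0, 1], [1, 2]])
mask, t = intersecting_edges(edges, np.array([-1.0, 1.0, 2.0]))
```

Because all edges are tested in one vectorized pass, the cost is linear in the number of edges, consistent with the linear-time claim above.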
**W2. No comparisons to any alternate approaches over marching cubes are provided -- e.g. marching cubes + QEM decimation etc.**
- Thank you for your suggestions. Please refer to our **global responses G.2 and G.3** and the corresponding **new Figures A, D, E, and F in the rebuttal PDF**. We provide supplementary experiments to compare with MT, NDC, and the QEM scenario.
**W3. No rigorous qualitative comparisons are provided, which could be important since chamfer does not always capture the shape quality**
- **Please refer to our Figures A & F in the rebuttal PDF, which show detailed extracted vertices and edges and normal errors for a visual explanation.** **Fig. A provides rigorous visualization with enlarged meshes and normal directions are visualized with plasma colors.**
In addition to the Chamfer distance, we present the angular distance in Table 5 in the Appendix. This measurement, derived from the extracted normal vectors (directions given by the normalized Jacobian of the SDF function), assesses how much the normal vectors deviate from the ground truth (detailed in Sec. 6). As anticipated, our method effectively estimates the faces using a relatively small number of vertices compared to the sampling-based approach. This trend is consistently observed across various objects. We believe this compensates for the limitations of the Chamfer distance.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 1zQb, as we approach the conclusion of the reviewer-author discussion period, do you have any remaining questions, or have your concerns been fully addressed? Your confirmation and reconsideration of your rating would be greatly appreciated. | Summary: The work addresses the problem of exact iso-surface extraction given an implicit surface represented by a neural network. The idea behind this line of work is to closely analyze the architecture of the neural network to efficiently extract the true (or at least an approximate of) learned iso-surface. Previous work titled 'Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision' develops the analysis for ReLU networks. However, many newer works use local latent codes to achieve better reconstruction quality. These latent codes are usually obtained via trilinear interpolation from a grid of learnable latent codes. The previous method does not trivially apply to iso-surface extraction in this case. The current work continues the analysis for ReLU networks that now have trilinear positional encoding as input. The paper first shows that under the eikonal constraint, trilinear regions decode into flat planes. Then, it derives a polynomial defining the intersection of two hypersurfaces. With this, the paper suggests an algorithm for zero-level set extraction given the trained neural network and the positional encoding.
Strengths: * The paper extends previous work from “Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision” to architectures with positional encoding (PE) implemented via trilinear interpolation. This is a very reasonable extension, as many implicit representation methods in practice use grids of local embeddings.
* Theorem 4.5 (and Corollary 4.6), which prove that the trilinear regions of PE decode into flat planes under the eikonal constraint, seem surprising. If true, this is a great insight into the performance of these models.
* The suggested algorithm can extract a good approximation of the learned surface and matches the accuracy of Marching Cubes at much smaller resolutions.
* The paper is self-contained and presents interesting insights into what neural implicit representation methods are capable of reconstructing.
Weaknesses: Here are the main concerns I found in the paper in order of importance (with the first being the most important):
* I found the paper very hard to understand. I am quite familiar with neural implicit representations (though not with Polyhedral Complex Extraction), but it still took me at least three full passes before I started understanding the general idea of the paper. The paper is self-contained and formal, which is great, but I would consider simplifying the language for the sake of understandability.
* The work is positioned as a better way of iso-surface extraction, yet there is no visualization of extracted meshes in the main paper. It is very hard to say if the mesh extracted with the proposed method is actually better than the ones extracted with Marching Cubes. In my view, MC64 seems to be actually closer to MC256 than the mesh extracted with 'ours' in Figure 7. A much more detailed visual comparison of extracted meshes seems necessary in the main paper given the claims.
* The method is shown to work only with relatively small neural networks: the experiments are performed on a model with three layers and a hidden size of 16, while DeepSDF, for instance, operates on eight layers, each with a width of 512. It seems that it can be used only as a source of understanding the behavior of these architectures, not as a practical method of iso-surface extraction.
* Comparison with only Marching Cubes seems very limited; it would be great to see a comparison with 'Analytic Marching' as well.
I will consider adjusting the final rating if weakness #2 and question #1 are addressed in the rebuttal
Technical Quality: 3
Clarity: 2
Questions for Authors: * I am quite confused about the following (probably due to my limited understanding of the math in the paper): A standard ReLU network with a constant latent code as input (i.e., DeepSDF) can be seen as a trivial trilinear positional encoding (i.e., the entire domain is trilinear). Given that such networks are usually trained to satisfy the eikonal constraint, why aren’t they limited to always produce flat plane reconstructions, given Corollary 4.6?
* It seems that a natural measure of how accurately the algorithm extracts the learned iso-surface would be:
1. Sample N points on the surface of the extracted mesh
2. Predict SDF on each of them
3. Calculate sum of squared SDF values
it should be 0 if we extracted the learned surface precisely. Would be interesting to see this metric. Can we confirm Corollary 4.6 with it?
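The three steps of this proposed metric can be sketched in a few lines of NumPy; the mesh and SDF below are toy stand-ins (an octahedron checked against the exact unit-sphere SDF), not the paper's models:

```python
import numpy as np

def mean_squared_sdf(vertices, faces, sdf, n_samples=20_000, seed=0):
    # Reviewer-suggested metric: sample points uniformly on the extracted
    # mesh surface and average the squared predicted SDF values; an exact
    # extraction of the learned zero level-set would give 0.
    rng = np.random.default_rng(seed)
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    # Area-weighted face selection so sampling is uniform over the surface
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    idx = rng.choice(len(faces), size=n_samples, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each selected triangle
    u, v = rng.random(n_samples), rng.random(n_samples)
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    pts = a[idx] + u[:, None] * (b[idx] - a[idx]) + v[:, None] * (c[idx] - a[idx])
    return float(np.mean(sdf(pts) ** 2))

# Toy check: an octahedron mesh evaluated against the exact unit-sphere SDF.
verts = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]], float)
octa = np.array([[0,2,4],[2,1,4],[1,3,4],[3,0,4],
                 [2,0,5],[1,2,5],[3,1,5],[0,3,5]])
err = mean_squared_sdf(verts, octa, lambda p: np.linalg.norm(p, axis=-1) - 1.0)
```

Here `err` is strictly positive because the octahedron only coarsely approximates the sphere; a mesh lying exactly on the zero level-set would drive it to 0.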
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations are well discussed. I would also add a discussion on the depth and width of the neural networks for which the method is practical to use.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and for pointing out the effective rebuttal points. We appreciate your time and effort in going through our work multiple times.
**W1. Simplification of Language**
We are committed to making our paper more accessible. Based on your feedback, we will revisit the language and structure to ensure that the key ideas are presented as clearly as possible. Specifically, we will: 1) Simplify complex sentences and terminology where possible. 2) Add a brief introductory section that explains Polyhedral Complex Extraction in simpler terms before delving into the technical details. Please refer to Fig. 1, and Alg. 1 & 2 in the Appendix for visual and textual aids illustrating core concepts. We believe our new **Figure B & C (rebuttal PDF)** would also help readers.
**W2. Pitfall to Over-Complex**
There is a *risk* that MC64 appears more similar to MC256. This is because we used small networks in this visualization to highlight the limits of MC with a limited number of vertices and edges in GT (simulating flat or high curvature/edged regions). Here, MC64 tends to oversample the vertices. When examining the nose of the rabbit in MC256, the same bright yellow color can be observed in areas where our method extracts flat facets. In contrast, MC64 oversamples around the edges, creating an over-smooth surface where it should not. For instance, when a cube is tilted at a 45-degree angle, its vertices may not be sampled as they lie within the sampling grids, leading to the creation of a multi-facet polytope. This is an innate weakness in sampling-based methods. Additionally, inaccuracies in estimating normal vectors from it will lead to incorrect light reflection outcomes. **Please refer to Figure A (rebuttal PDF)**, which shows detailed extracted vertices and edges and **Fig. F** for the normal errors in the Large models, for a visual explanation.
**W3. Computational Complexity**
As mentioned in Section 4.2 and detailed in Appendix B, the time complexity of our method is linear with respect to the number of vertices, making it optimal for extraction. After submission, by utilizing PyTorch's `masked_scatter_` function to identify intersecting edges and others, we significantly reduced the processing time from 53.5 seconds to approximately 3 seconds for the Dragon (Large) model, which is in between MC and MT (Marching Tetrahedra), much closer to MC. We have provided both a theoretical analysis of the time complexity (Appendix B) and practical benchmarks using the Large models to validate our findings. *The code will be publicly released upon acceptance.*
Moreover, we emphasize HashGrid allows us to use small decoding networks with hash tables of large parameters.
Networks using HashGrid (Muller et al., 2022) typically have **a small hidden size of 64 and only one hidden layer**, even in implicit neural surface learning, NeuS2 (Wang et al., 2023). This is because HashGrid contains a large number of learnable parameters in its hash tables, specifically (feature size 2) x (# of entries 2^19 = 524K) x (multi-resolution levels 4). In our experiments, we used a reasonable network capacity. For clarification, the input size of 3 is mapped to (levels 4 x features 2 = 8) by HashGrid, and then the decoding network maps 8 -> 16 -> 16 -> 1 with 3 mapping layers (as described in Sec. 6).
**W4. Additional comparisons**
- Unfortunately, Analytic Marching is not feasible for our benchmarks due to its high time complexity. Specifically, its time complexity is $\mathcal{O}((n/2)^{2(L-1)} |\mathcal{V_P}| n^4 L^2)$, which grows exponentially with the number of layers $L$ and the width $n$ of the networks, where $|\mathcal{V_P}|$ represents the number of vertices per face (Section 4.1 from Lei and Jia, 2020). In contrast, our method has a linear time complexity with respect to the number of vertices, as discussed in Section 3.2, allowing it to avoid visiting the exponentially growing number of linear regions. Instead, we would like to direct you to **our global response G.2**, which includes supplementary experiments comparing our method with MT and NDC.
**Q1. Why aren’t they (DeepSDF) limited to always produce flat plane reconstructions?**
As you might know, HashGrid (Muller et al., 2022) includes learnable parameters, the hash tables. The eikonal loss causes the hash entries to form a flat plane within trilinear regions. We denote the entries as $\mathrm{H}$, which are learnable parameters, in Definition 4.1 in Section 4.1. For this reason, Corollary 4.6 does not apply to constant latent codes. However, since a standard ReLU network (with a constant latent code) is piecewise linear, it may produce flat planes in linear regions. Nonetheless, we want to emphasize the spectral bias (Tancik et al., 2020) discussed in the Introduction, which arises without appropriate positional encodings and serves as our motivation. As a result, DeepSDF (Park et al., 2019) may be limited to simpler shapes and suffer from this spectral bias.
**Q2. Quantitative metric to validate Corollary 4.6**
We appreciate the proposed metric; however, we must point out that our existing metrics already measure the suggested aspect. Firstly, using the Chamfer distance, we sample points on the surface of the extracted mesh and calculate the distance to the closest point (which is defined as the SDF value) on the ground truth mesh. This mesh is derived from the over-sampled MC, reflecting the learned surfaces of the networks. We believe this is essentially the same metric (under sufficient samplings and well-fitted SDF networks) as the one you suggested.
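The Chamfer computation described above (sample points, then measure the distance to the closest point of the other set) can be sketched as follows; this is a minimal brute-force NumPy illustration, not the evaluation code used in the paper:

```python
import numpy as np

def chamfer(a, b):
    # Symmetric Chamfer distance: for each point, the squared distance to
    # the nearest point of the other set, averaged over both directions.
    # Brute-force O(|a||b|); KD-trees are preferable for large point sets.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return 0.5 * (d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Identical point sets give 0; a shifted copy gives a positive distance.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
zero = chamfer(pts, pts)
off = chamfer(pts + 0.1, pts)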
Furthermore, we want to emphasize Equation 13 and Figure 3, which illustrate how flatness changes with respect to the weight of the eikonal loss using the flat constraints identified in the proof of Theorem D.5. We have confirmed that the eikonal loss is an effective method to ensure flatness and evaluate the extracted mesh using the Chamfer distance.
---
Rebuttal Comment 1.1:
Title: Follow-Up on Rebuttal Feedback
Comment: Dear Reviewer MSAr,
Thank you for your thorough review and the constructive feedback you provided.
We would like to kindly inquire whether our responses to Weakness #2 and Question #1 have addressed your concerns sufficiently, as you mentioned you might consider adjusting the final rating based on our rebuttal. Given the limited time for the author-reviewer discussion period, we would also appreciate it if you could let us know if there are any additional questions or clarifications needed.
Thank you again for your time and effort.
Best regards,
On behalf of all authors
---
Rebuttal 2:
Comment: Regarding **Fig. A** (closer looks at bunny's nose), we understand you are referring to the small facets along the left-side of the nose line in the highlighted region. We assume these narrow facets are embedded in the facets of MC256, while MC64 fails to capture the left and right edge of the nose and represents this with more facets cutting these two sides of edges (*not aligned* with nose lines and rather assign more facets in here).
Regarding the quantitative evaluation, we are happy to report the predicted SDF on the surface as you suggested. We randomly sampled 100K points on the surface of pretrained networks (a bunny Large model) and calculated the *average* of squared predicted SDFs (the model's outputs) on the surface. As we expected, the predicted SDFs on the reconstructed surface converge to zero as the weight of the eikonal loss increases, and the value is close to zero, $\approx 3e-8$. Note that, for a large model, fine-grained trilinear regions from the dense grid marks (from the denser multi-resolution grids) allow relatively small errors of $97 \times 1e-9$ without the eikonal loss, although an eikonal loss weight of $1e-2$ further decreases this to $31 \times 1e-9$.
| Eikonal Loss Weight | Average of Squared SDFs |
|-----------------------|----------------------- |
| 0 | 97 x 1e-9 |
| 1e-4 | 91 x 1e-9 |
| 1e-3 | 68 x 1e-9 |
| 1e-2 | 31 x 1e-9 |
Plotting the predicted SDF values on the reconstructed surfaces would be beneficial. We could visualize this in a manner similar to **Fig. F**, which illustrated the normal errors on the surfaces. We anticipate that regions with high curvature will be particularly noteworthy. Notice that, sadly, we cannot update the rebuttal PDF during the author-reviewer discussion period.
Apologies for overlooking your insights, and we hope this response addresses your concern.
---
Rebuttal Comment 2.1:
Comment: Thank you a lot for such a fast response.
Do you happen to also have these numbers for MC64 and MC256?
---
Rebuttal 3:
Comment: We understand that, for the extracted mesh using MC64 or MC256, we can measure the average of predicted SDFs and how much they deviate from the learned surface of networks.
In the same procedure, the corresponding MC numbers are as follows:
| MC Sample | Average of Squared SDFs |
|-|-|
| 64 | 2308 x 1e-9 |
| 256 | 66 x 1e-9 |
We confirmed that the numbers are higher than the mesh using our method (31 x 1e-9).
---
Rebuttal Comment 3.1:
Comment: Thank you.
I don't have any more questions and will adjust my rating. I think the paper is interesting and should be accepted, given the promised changes in the final version.
---
Reply to Comment 3.1.1:
Comment: Thank you for your thoughtful and detailed feedback. We truly appreciate the time you invested in reviewing our manuscript. | Rebuttal 1:
Rebuttal: Please refer to the **rebuttal PDF (below the PDF button)** attached for **Figures A-F** mentioned in the author's feedback. This PDF is high-resolution; please enlarge it if needed.
* **Fig. A.** Detailed mesh visualizations for Ours, MC, MT, and NDC.
* **Fig. B.** A visual explanation for a tropical polynomial and how a space is divided by it.
* **Fig. C.** A schematic diagram for 2D subdivision of our method.
* **Fig. D.** A plot of chamfer distances for Ours, MC, MT, and NDC.
* **Fig. E.** A plot of chamfer distances for Ours and MC applying QEM (gradual mesh simplification).
* **Fig. F.** Visualization of face normal errors on the surfaces.
----------------------------------------------------------------------
In summary, our work received the following positive reviews:
* **Motivation**:
- "A very reasonable extension" (MSAr)
- "Well motivated and holds a lot of promise," "first attempt" (FmNY)
* **Findings and Insights**:
- For Thm 4.5 (Coro 4.6), "seem surprising," "great insight" (MSAr)
- "Theoretical aspect of the work is in my view the most clear contribution" (FmNY)
- "The theoretical framework is interesting and it is a good application of tropical algebra" (1oW4)
* **Results**: Proved mesh efficiency (MSAr, 1zQb)
* **Presentation**:
- "Self-contained" (MSAr)
- "Thorough theoretical analysis" (1zQb)
- "Mathematical notation is good overall, with good choices for the symbols and their use" (1oW4)
----------------------------------------------------------------------
As an interdisciplinary work bridging algebraic geometry and neural 3D, we respect the raised issues and have carefully considered them, understanding the need for thorough analyses and clear explanations. We summarize our global feedback on the core issues below:
**G1. Why does the chamfer efficiency matter?** (1zQb, FmNY)
- Although we found our method achieved the *lowest* chamfer distance in the Large models (see Tab. 6-7 in Appendix and **Fig. D**), we argued that the chamfer efficiency is *our highlight* since:
1) sampling-based methods suffer exponential memory/compute costs to find an optimal resolution,
2) for lightweight deployment of meshes (e.g., WebGL),
3) and to get accurate normals for light modeling (see angular distance in Tab. 5 and **Fig. F**).
**G2. Additional Comparisons to Alternate Approaches (MT, NDC)** (MSAr, 1zQb, FmNY, 1oW4)
- Marching Cubes (MC) remains the de facto sampling-based method. In contrast, our method is a white-box method grounded in the theoretical understanding of trilinear interpolation and polyhedral complex derivation. Here is **our rationale for the evaluation**:
1. Analytical approaches for CPWA functions are limited by their inability to effectively manage spectral bias and by their exponential computational cost, which struggles with exponentially growing linear regions (infeasible to run on our benchmarks).
2. We also exclude optimization-based methods as they typically rely on MC or its variants initialization in their pipelines (our method can also be integrated into them), such as MeshSDF (Remelli et al., 2020), Deep Marching Tetrahedra (Shen et al., 2021), and FlexiCubes (Shen et al., 2023). Additionally, these methods result in prolonged computational costs.
- **MT & NDC results:** To be faithful to the reviewers' requests, we provide both **rigorous qualitative** (**Fig. A & F**) and **quantitative** (**Fig. D**) analyses of **Marching Tetrahedra (MT)** and **Neural Dual Contouring (NDC, Chen et al., 2022)** in the rebuttal PDF. MT is a popular variant of MC that utilizes six tetrahedra within each grid cell to identify intermediate vertices. NDC is a data-driven approach (a pretrained model) that uses Dual Contouring. In short, MT requires SDF values efficiently but is less efficient regarding the number of vertices extracted. (MT outperforms MC at the same resolution, though it produces more vertices.) NDC faces a generalization issue in the zero-shot setting, producing unsmooth surfaces. We used the code from Deep Marching Tetrahedra (Shen et al., 2021) for MT, the official code of NDC, and PyMCubes as our *modernized* MC library.
**G3. Applying QEM** (1zQb, FmNY, 1oW4)
- QEM is a popular mesh simplification algorithm; it could potentially improve the efficiency of both MC and our method, which is validated in **Fig. E**. Overall, mesh simplification favors our method, as shown. Notice that QEM entails a higher computational cost, at least $\mathcal{O}(n \log n)$ (with $n$ the number of vertices). Additionally, such methods can cause shape deformation or holes and are sensitive to hyperparameters. We used MeshLab's Quadric Edge Collapse Decimation with the default settings, halving the number of vertices at each step.
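For reference, the error measure at the heart of QEM (Garland & Heckbert's quadric error metric) can be sketched in a few lines; this is an illustrative NumPy fragment of the cost function only, not MeshLab's implementation, and the $\mathcal{O}(n \log n)$ factor comes from keeping collapse candidates in a priority queue:

```python
import numpy as np

def vertex_quadric(planes):
    # Fundamental error quadric: Q = sum of p p^T over the planes
    # p = (a, b, c, d), with ax + by + cz + d = 0 and a^2 + b^2 + c^2 = 1,
    # incident to a vertex.
    return sum(np.outer(p, p) for p in planes)

def qem_cost(Q, v):
    # Sum of squared distances from candidate position v to those planes;
    # QEM repeatedly collapses the edge whose merged vertex has lowest cost.
    vh = np.append(np.asarray(v, float), 1.0)
    return float(vh @ Q @ vh)

# Example: a single plane z = 0, i.e. p = (0, 0, 1, 0).
Q = vertex_quadric([np.array([0.0, 0.0, 1.0, 0.0])])
```

A vertex on the plane has zero cost, while one at height 2 pays the squared distance 4, which is why collapses on flat regions (exactly where our method already emits few vertices) are chosen first.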
**G4. Better Presentation** (MSAr, 1oW4)
- As an interdisciplinary work, spanning tropical geometry, deep learning, and implicit neural surface learning, we recognize that the content needs visual aids to illustrate core concepts and clear explanations for inexperienced readers (1oW4). We are committed to making our paper more accessible. Specifically, we will: 1) simplify complex sentences and terminology where possible; 2) add a brief introductory section that explains Polyhedral Complex Extraction in simpler terms before delving into the technical details. For those who might have missed them, please refer to Fig. 1 and Alg. 1 & 2 in the Appendix for visual and textual aids illustrating core concepts and procedures. Supplementarily, our new **Figures B & C (rebuttal PDF)**, B) a visual explanation of a tropical polynomial and how it divides a space, and C) a schematic diagram of the 2D subdivision of our method, would also help readers.
Pdf: /pdf/41e1a7a73752b0a656c7cd1bab99e71b3463e854.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
No Free Delivery Service: Epistemic limits of passive data collection in complex social systems | Accept (poster) | Summary: This work shows that the classic paradigm of training and testing in ML has validity flaws that make it unreasonable to generalize about performance from test-set performance. They use the no-free-lunch theorems to theoretically demonstrate this, and then provide an empirical lens via the MovieLens benchmark dataset for recommender systems. They show that multiple models are able to fit the observed data but still perform arbitrarily on unobserved data.
Strengths: The paper is very well-written overall. It tackles a very important question that has many implications given how popular the train-test paradigm is.
Weaknesses: I’m ultimately not very sure what the prescriptive argument of this work is. I understand that there is value in work that points out a problem, but the issues with generalizing performance of ML systems into the real world are widely known and empirically shown (as demonstrated by the presence of the WILDS dataset [https://wilds.stanford.edu/datasets/]). This work provides a theoretical grounding for this, but from the abstract’s final sentence, I expected more implications rather than just gestures to participation and open science. I would have liked to see implications which demonstrate the strengths of approaching this problem from a theoretical perspective, as opposed to just the empirical evidence we have of it.
There is also an increasingly large line of work on the limitations of using observational data (as opposed to data from, e.g., randomized controlled trials), as well as on performative prediction, which I felt this work could connect to more, since that seems to be part of the premise of the issue with passive data collection.
A smaller point of confusion: I was sometimes unclear whether you were referring to a “benchmark” as only the evaluation dataset or also the training dataset.
Technical Quality: 3
Clarity: 4
Questions for Authors: Above, in weaknesses.
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. I agree that it is important to clarify the prescriptive argument and the role of participatory methods. I hope that the response to all authors as well as the additional clarifications below can alleviate your concerns. Since everything is addressable via minor clarification updates, I hope this will also allow you to raise your score and confidence.
*I’m ultimately not very sure what the **prescriptive argument** of this work is. I understand that there is value in work for pointing out a problem, but the issues with generalizing the performance of ML systems into the real world are widely known and empirically shown (as demonstrated by the presence of the WILDS dataset [https://wilds.stanford.edu/datasets/]). This work provides a theoretical grounding for this, but from the abstract’s final sentence, I expected more implications rather than just gestures to participation and open science. I would have liked to see implications which demonstrate the strengths of approaching this problem from a theoretical perspective, as opposed to just the empirical evidence we have of it.*
* Thank you for highlighting this issue, I agree that this is an important aspect. There are three aspects in your question that I would like to separate:
* Prescriptive argument: The paper's theoretical results do indeed provide *actionable* insights to improve data collection via the $k$-core condition for validity. In short, we would want to collect data that either increases the $k$-connectivity or the size of the $\text{rank}(f)$-subgraph. Importantly, we can compute from a given sample graph where to collect data points. Please see the response to all authors for a detailed discussion of this.
* Lack of justification versus impossibility results: If I understand your comment correctly, the key difference between the known results and issues that you are listing is that, to the best of my knowledge, these are based on a "lack of justification", i.e., that we know that our usual guarantees do not apply. In contrast, this paper provides much stronger insights in form of rigorous impossibility results. The insights that can be gained from such results are much more substantial and provide a path forward to improving the situation, as in the $k$-core condition above (while just noting a lack of justification can not). Please see the response to all authors for a detailed discussion of this (including also the different contributions of sufficient and necessary conditions).
* Participatory: I agree that the current discussion of this is insufficient, thanks for pointing me to it. The argument for participatory methods stems from the sheer amount of data that would need to be collected in a *targeted* way according to the $k$-core scheme above. While we are currently preparing a paper for submission that introduces a novel method for this purpose, I believe it is beyond the scope of this paper to also cover this aspect in detail (e.g., it would need to introduce additional concepts related to mechanism design, game design, economics, and the efficient computation of the $k$-core objectives). However, I will add a further discussion of why participatory data collection is favorable for the setting of large amounts of data plus targeted collection.
*There is also increasingly a line of work on the **limitations of using observational data** (as opposed to data from, e.g., randomized controlled trials, as well as performative prediction, which I felt this work could connect to more, since that seems to be part of the premise of the issue with passive data collection.*
* I agree that this is an important aspect. However, I would point out that the paper actually discusses connections to work that goes beyond observational data, i.e., to counterfactual and causal estimators (e.g., see lines 261-265 in the submission). To the best of my knowledge, this is also the first impossibility result for such estimators in settings as considered in this paper. As such, the theoretical insights provide, in my opinion, non-trivial insights into the limitations of these popular approaches. Thank you for pointing me to this. I believe it is an important contribution of the paper and will discuss it more clearly in the updated version.
*A smaller point of confusion, I was unclear sometimes whether you were referring to “benchmark” only as the evaluation dataset or also the training dataset.*
* I agree that this is not clear from the current presentation. Part of the issue comes from the fact that in (k-fold) cross-validation, parts of the training set act as the validation set. I will clarify this (maybe in the appendix due to space constraints).
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying on the prescriptive component, that is helpful! My review remains the same as a borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response, I really appreciate it. I'm glad that clarifying the prescriptive component was helpful, especially since it was my impression that this was your main concern and that you are otherwise very supportive of the contributions and potential impact of the paper. Please let me know if I can help to clarify any further questions or concerns. | Summary: Building advanced and very large general-purpose ML models using extremely large datasets from sources such as the Internet has gained a lot of recent attention.
Particularly, when the training set is sampled from a distribution $S$ while the target distribution is $T$, the paper introduces the notions of $(\epsilon,\alpha)$-validity and $(\epsilon,\alpha)$-invalidity for inference validity and defines test validity accordingly.
The paper shows that there is "no free delivery service" of data that allows inference/test validity on a global scale for complex social systems.
More importantly, the paper provides the metrics and necessary conditions that limit the scope of AI in complex social systems.
Strengths: S1. The paper studies a problem of immense importance: with the rapid growth of advanced ML models such as LLMs and their versatile usage across a wide range of tasks that impact societies and human lives, establishing the necessary conditions and checks and balances to limit their scope is vital. Formally introducing such metrics and conditions, this paper is an interesting step in that direction.
S2. The paper follows a formal and careful writing scheme, which makes it easy to follow and fun to read.
S3. The proposed validity framework and the inference and test validity concepts provide metrics for evaluating the validity extension of a model trained on a distribution S to T.
Weaknesses: W1. While I enjoyed reading the formality and the definitions in section 2, I did not find the so-called no delivery service (NDS) of data surprising.
As the author also stated, based on the theory of ML and the NFL theorem, when there is a sampling bias and the source and target distributions are different, the expected performance guarantee of the model does not carry over. This has also been extensively discussed under the "lack of generalizability" concept.
W2. It is not particularly a weakness, but if I am not mistaken, the NDS observation for complex social systems naturally follows the previously known heavy-tailed distribution property of these systems.
Technical Quality: 3
Clarity: 4
Questions for Authors: No specific questions
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Please see the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. I agree that W1 is important to clarify and hope that the response to all authors as well as the below additional clarifications can alleviate your concerns. Since everything is addressable via minor clarification updates, I hope this will also allow you to raise your score.
*W1. While I enjoyed reading the formality and the definitions in section 2, I did not find the so-called no delivery service (NDS) of data surprising. As the author also stated, based on the theory of ML and the NFL theorem, when there is a sampling bias and the source and target distributions are different, the expected performance guarantee of the model does not carry over. This has also been extensively discussed under the "lack of generalizability" concept.*
* Thank you for raising this important question. The key difference to the known results and issues that you are listing is that, to the best of my knowledge, these are based on a "lack of justification" argument, i.e., that we know that our usual guarantees do not apply. In contrast, this paper provides much stronger insights in form of rigorous impossibility results. The insights that can be gained from such results are much more substantial and provide a path forward to improving the situation (while a lack of justification can not). Please see the response to all authors for a detailed discussion of this (also in terms of sufficient versus necessary conditions and their different contributions).
* I also want to highlight that the results in this paper do point to an *actionable* path forward via targeted data collection using the $k$-core condition. This is also discussed in detail in the response to all authors. Again, this is possible because of the theoretical insights in this paper.
* For "lack of generalizability": The way I am familiar with this concept in the context of validity theory is mostly as an empirical observation, e.g., that studies do not generalize beyond their very specific context. The results in this paper are again stronger, in the sense that the benchmark itself is invalid (not only that it doesn't generalize to *new* settings).
* The practical importance of the results is also highlighted by the widespread usage of benchmarks like MovieLens in settings where it cannot be valid, even in SOTA LLM benchmarks such as BigBench (https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/movie_recommendation/README.md)
*W2. It is not particularly a weakness, but if I am not mistaken, the NDS observation for complex social systems naturally follows the previously known heavy-tailed distribution property of these systems.*
* The NFDS results in this paper are two-fold:
* They first establish necessary conditions that need to hold for model validation to be valid
* and then show that these conditions are violated when sampling from complex social systems
* As such the necessary conditions hold independently of the concrete application in social systems.
* While the heavy-tailed distribution in complex systems have indeed been established prior to this work, one contribution of this paper is to combine this knowledge with the newly established necessary conditions to gain non-trivial insights into current practice of AI. | Summary: The paper addresses the critical issue of model validation in AI systems, especially those deployed in complex social environments. It argues that the prevalent train-test paradigm, commonly used for model validation, is often invalid in these settings due to the inherent assumptions it violates. The paper presents formal impossibility results, demonstrating that for many AI tasks involving complex social systems, the train-test paradigm cannot ensure valid model performance assessments. The study uses the MOVIELENS benchmark to illustrate these issues and suggests remedies like participatory data curation and open science to address the epistemic limitations identified.
Strengths: 1. The paper provides novel insights into the limitations of the train-test paradigm, especially in the context of AI systems interacting with complex social systems.
2. It offers formal proofs of the epistemic limitations, providing a robust theoretical foundation for its claims.
Relevance: The study addresses a highly relevant issue in modern AI, given the increasing deployment of AI systems in socially impactful contexts.
3. Using the MOVIELENS benchmark to illustrate the theoretical points adds practical relevance and makes the arguments more tangible.
Weaknesses: 1. The formal proofs and theoretical discussions might be too complex for practitioners without a strong background in the relevant mathematical and statistical concepts.
2. While the paper suggests remedies, it does not delve deeply into how these can be practically implemented on a large scale, which could limit their immediate applicability.
3. The results are heavily dependent on the specific characteristics of the social systems and data collection methods considered, which might limit the generalizability of the findings to other contexts or systems.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you elaborate on alternative model validation paradigms that could be more suitable for complex social systems?
2. Could you provide more examples or case studies beyond the MOVIELENS benchmark to illustrate the validity issues in different types of AI systems?
3. Could you discuss any potential limitations of your approach and how they might be addressed in future research?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. To make the paper more accessible, consider simplifying some of the theoretical discussions or providing more intuitive explanations alongside the formal proofs.
2. Include more detailed discussions on how the suggested remedies, such as participatory data curation, can be practically implemented in real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. I agree that your comments address important questions and hope that the response to all authors as well as the additional clarifications below can alleviate your concerns. Since everything is addressable via minor clarification updates, I hope this will also allow you to raise your score.
*The formal proofs and theoretical discussions might be too complex for practitioners without a strong background in the relevant mathematical and statistical concepts.*
* Thank you for the suggestion. My aim is certainly to make this paper as accessible as possible, and I tried to do so via informal statements of the main results and by providing extensive context. Unfortunately, the page constraints set a limit on how much this is possible. However, I will follow your suggestion and add more intuitive examples where possible.
* While I acknowledge the issue, I'd also point out that the impression might not be fully uniform, e.g., feedback from other reviewers states that the paper is "easy to follow and fun to read", "very well-written".
*Could you elaborate on alternative model validation paradigms that could be more suitable for complex social systems*
* Thank you for raising this question. With my current knowledge, I would not so much point to alternative model validation paradigms but rather to improved data collection for model validation via the paper's $k$-core condition. For details on this, please see the response to all authors.
*Could you provide more examples or case studies beyond the MOVIELENS benchmark to illustrate the validity issues in different types of AI systems*
* Thank you for this question. Indeed, these results apply to a wide range of settings. In the PDF attached to the response to all authors, I illustrate this using widely used benchmarks for three different settings:
* Reasoning - FB15K-237 (https://paperswithcode.com/dataset/fb15k-237): Here the goal is to infer the truth value of (subject, predicate, object) triples using logical reasoning. It is a widely used benchmark for reasoning and, as can be seen from the PDF, has the same structural properties as MovieLens. As such, the results of this paper apply directly (also discussed in Section 3 and Appendix D.3). The social system that generates the biased observations is both the production of knowledge and its recording in FreeBase.
* Link Prediction in Graphs - CORA (https://paperswithcode.com/sota/link-prediction-on-cora): Here, the goal is to predict links/edges in a citation graph based on observed edges. It is a widely used benchmark in graph learning and again shows the same structural properties as MovieLens. The social system that generates the biased observations is the citation practice in science (incl. popularity bias, etc.).
* Extreme Classification - Wiki10-31k (http://manikvarma.org/downloads/XC/XMLRepository.html): Here, the goal is to predict the correct labels for Wikipedia entries from a large number of user-provided labels (hence extreme classification). The social system that generates the biased observations consists of the authors who contribute Wikipedia entries and their labeling practice.
* In addition to these datasets, I also want to highlight that MovieLens is THE recommender systems benchmark (comparable to MNIST for vision) and is still widely used, even in SOTA LLM benchmarks such as BigBench (https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/movie_recommendation/README.md)
*Could you discuss any potential limitations of your approach and how they might be addressed in future research?*
* Please see the limitations section in the appendix of the paper.
*The results are heavily dependent on the specific characteristics of the social systems and data collection methods considered, which might limit the generalizability of the findings to other contexts or systems.*
* The NFDS results in this paper actually are two-fold:
* 1) It first establishes necessary conditions that need to hold for *any* model validation to be valid
* 2) and then shows that these conditions are violated when sampling from complex social systems
* As such, the necessary conditions in 1) hold independently of the concrete application in social systems and provide general insights into the validity of model validation.
* That being said, I again want to highlight the urgent need to also understand the interaction with social systems since this is affecting much of our practice. Hence, even if the results would only be contained to this setting, I wouldn't consider it a limitation. | null | null | Rebuttal 1:
Rebuttal: Thanks to all reviewers for their insightful feedback, it will help me to clarify important aspects of the paper and improve its impact. Before addressing the reviewers' concerns in detail, I am happy to acknowledge the overall very positive feedback from all reviewers on soundness, contribution, and presentation, e.g.,
- "very well-written", "tackles a very important question that has many implications" (qRpv).
- "a problem of immense importance", "formal and careful writing scheme, which makes it easy to follow and fun to read" (9VqZ)
- "novel insights" into a "highly relevant issue in modern AI" (RViy)
In the following, I will address two questions that came up in different forms across reviews. Please see the individual responses for further discussion of points that are specific to a single review. I hope that the following discussion can alleviate the reviewers' concerns and allow them to raise their scores, since all points can be addressed easily via minor clarifications.
## Practical implications / prescriptive argument
Reviewers raised questions with regard to the practical implications of the theoretical results, i.e., are they useful to improve our data collection efforts?
I agree that this is an important question and, indeed, the theoretical results of this paper provide *direct* insights into how to improve data collection for model validation via its $k$-core conditions. In particular, Lemma 2 and Corollary 3 imply two clear objectives for targeted data collection:
- a) collecting data points that increase the $k$-connectivity of the sample graph. This would increase the complexity of the world that can be assumed such that model validation is still valid for the *entire sample graph*
- b) collecting data points that increase the size of the $\text{rank}(f)$-core of the sample graph, where $\text{rank}(f)$ is the complexity of the world that we want to assume. This would increase the *size of the subgraph* for which a $\text{rank}(f) = k$ assumption would still yield valid model validation
Hence, both objectives are based on the k-core condition and attack it from different angles: increasing the minimal complexity that we can assume for the entire graph or increasing the size of the valid subgraph for a given complexity. Moreover, both objectives can be computed from the known sample graph (doing this efficiently is non-trivial though).
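Since both objectives rest on $k$-core computations, a rough sketch may make this concrete. The following is *not* the paper's implementation — just a minimal peeling algorithm on a hypothetical bipartite user-item sample graph (the node names and edges are made up for illustration):

```python
from collections import defaultdict

def k_core(edges, k):
    """Nodes of the k-core: the maximal subgraph in which every node
    has degree >= k, found by repeatedly peeling low-degree nodes."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for node in list(alive):
            if len(adj[node] & alive) < k:  # degree within surviving subgraph
                alive.discard(node)
                changed = True
    return alive

# Hypothetical sample graph: a dense core plus one sparsely connected
# user (u4), which the 3-core peels away.
edges = [("u1", "i1"), ("u1", "i2"), ("u1", "i3"),
         ("u2", "i1"), ("u2", "i2"), ("u2", "i3"),
         ("u3", "i1"), ("u3", "i2"), ("u3", "i3"),
         ("u4", "i1")]
print(sorted(k_core(edges, 3)))  # ['i1', 'i2', 'i3', 'u1', 'u2', 'u3']
```

In this toy example, targeted collection in the sense of objective b) would solicit the ratings (u4, i2) and (u4, i3), which pulls u4 into the 3-core and grows the valid subgraph.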
I thank the reviewers for highlighting the issue and agree that it is not very clear from the current write up. Unfortunately, when fitting the content within the page limit, I missed that the discussion of this aspect has suffered. I will include the above discussion in improved form in the updated manuscript.
In this context, the argument for participatory methods stems from the sheer amount of data that would need to be collected in a *targeted* way according to the scheme above. While we are currently preparing a paper for submission that introduces a novel method for this purpose, I believe it is beyond the scope of this paper to also cover this aspect in detail (e.g., it would need to introduce additional concepts related to mechanism design, game design, economics, and the efficient computation of the $k$-core objectives). However, I will add a further discussion of why participatory data collection is favorable for large amounts of data plus targeted collection.
## Lack of justification versus Impossibility results
Reviewers also raised questions with regard to the significance of the theoretical results relative to known issues and results (e.g., the known lack of performance guarantees and the known limitations of observational data).
The key difference between these known issues/results and this work is the difference between a lack of justification (known) and rigorous impossibility results (novel contribution of this work). For instance, while it is clear that standard learning theory does not apply to OOD settings, sampling bias etc. it is not clear that specific methods and practices are not valid. It just means we have no justification for them.
In contrast, the impossibility results in this work are much stronger. They show that there *cannot* be any method that leads to valid results in this setting, even for methods where this is not obvious at all such as counterfactual estimators. These results are also especially important in the context of scaling, which in most cases is exactly passive data collection from complex social systems and which is the dominant approach *today*. The results of this paper establish rigorous limits of this approach and show that alternative approaches are needed (such as the k-core approach discussed above).
Another way to look at it is in terms of *necessary versus sufficient conditions*. Sufficient conditions, which standard learning theory is often based on, provide insights into a specific/narrow case. They are important to motivate the validity of a specific method but do not say much when they are violated. On the other hand, this work establishes necessary conditions which provide conditions that have to hold for *every* case. Since they exclude a large set of hypotheses that otherwise would have to be explored, they provide important insights when the path forward is not entirely clear. This is exactly the case for evaluation in modern AI and why I believe the results of this paper are badly needed.
Again, I would like to thank the reviewers for highlighting this question. I agree that the discussion of this aspect is currently not optimal and will improve it along the above lines in the updated paper.
## Additional results
Per request of RViy, I have also attached a PDF with experimental results for additional settings on widely used datasets, relating the paper's results to reasoning (FB15k-237), graph link prediction (Cora) and extreme classification (Wiki10-31k). Please see also the response to RViy for further details.
Pdf: /pdf/ab7ee1ef8125d0f38a39a848208ab42faab143e0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
How does Gradient Descent Learn Features --- A Local Analysis for Regularized Two-Layer Neural Networks | Accept (poster) | Summary: The present manuscript concerns the study of how gradient descent achieves feature learning in a certain class of two-layer neural networks.
The main idea of the manuscript, which builds heavily on a previous line of work, is to consider how the population loss is minimized during the first steps of gradient descent. Using the fact that, after these steps, the student network spans the feature space of the teacher network, the authors show that the successive steps of gradient descent are essential to fully recover the teacher network.
Strengths: The results of the paper show that learning both layers of 2-layer neural networks can lead to perfect recovery of the teacher network. This is a stronger result than what was previously known in the literature, where the first layer of the network was trained only for a few initial steps and then fixed, leading to a particular class of random feature models.
Weaknesses: As far as I understand, the paper mainly concerns the minimization of the population loss, and there are no statements about the empirical loss and how sample complexity enters the results that are presented.
Technical Quality: 3
Clarity: 3
Questions for Authors: -
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank reviewers for the feedback. For the concern about sample complexity, please refer to the general response where we address the questions. | Summary: This paper studies the learning properties of networks trained with gradient descent. More precisely, the authors focus on the late stages of the dynamics where the algorithm learns the ground truth directions. These findings extend the usual ones in the literature that are focused on the early stages of the dynamics. The main theoretical result shows through a local landscape analysis the presence of a strong notion of feature learning, i.e., matching the ground-truth teacher directions in a non-simplified setting where second-layer weights are allowed to have negative values.
Strengths: The main strength of this submission is the nice theoretical contribution. The results are proved through a challenging local landscape analysis that significantly extends previous contributions.
Weaknesses: This submission has no strong weaknesses. However, the presentation could be improved in some parts of the manuscript. I suggest in the following section some possible changes to enhance the quality of the presentation for non-expert readers.
Technical Quality: 2
Clarity: 2
Questions for Authors: - I would place more emphasis on the local loss landscape analysis. As of now, it appears in a small paragraph at the end of Page 2, but introducing the challenges that the authors face for this type of analysis would help readers grasp the quality of the contribution.
- The authors correctly highlight many related works that focus on feature learning in the early stages of gradient descent dynamics and how they surpass kernel methods/random features. I believe it would help the non-expert reader to mention [1,2], which showed an equivalent non-linear feature map for networks trained with one step. This contrasts with the noisy linear equivalent map of random features emerging through Gaussian equivalence.
- The nice theoretical characterization of this work looks at a stronger feature learning metric, in contrast with closely related works that focus only on weak recovery. Are there other works that dealt with this matter in the context of gradient descent learning? I believe it would be nice to mention more generally previous works that made this "feature learning distinction" even in different contexts. For example, see [3] for Bayes-optimal learning of two-layer networks (specialization phase transition).
- In equation (3) you preprocess the target network to remove its first Hermite coefficient. As correctly mentioned by the authors, this is reminiscent of the procedure done by Damian et al. (2022); however, it would be nice to describe what would happen if such a pre-processing could not be done. Are the strong feature learning capabilities of this network lost due to the presence of a non-vanishing first Hermite direction?
- Could the authors be more precise on how they would extend the findings to a polynomial number of samples after Theorem 2? At that point, would the hardness of the target function (e.g., information exponent) matter?
- What do the authors mean when saying "complexity of $f_*$" on Page 4? It would be nice to mathematically formalize this concept.
- The key passage at the end of page 4 is a bit obscure to me and I would suggest rephrasing it more clearly, e.g., reminding the reader of $\varepsilon_0$.
- What is $\bar{w}_i$ on page 5?
- What is meant by an $\varepsilon_0$-net on page 5?
- A schematic drawing of the descent direction after the description on page 6 would help the reader grasp the concepts intuitively.
[1] A theory of non-linear feature learning with one gradient step in two-layer neural networks. Moniri et al. ICML 2024
[2] Asymptotics of feature learning in two-layer networks after one gradient-step. Cui et al. ICML 2024
[3] The committee machine: Computational to statistical gaps in learning a two-layers neural network. Aubin et al. NeurIPS 2018.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The limitations are addressed in the submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the effort to provide detailed feedback. We will incorporate the suggestions on the presentation of the paper and discuss more related works in the revision. Below, we try to address the reviewer's concerns.
> In equation (3) you preprocess the target network to remove its first Hermite coefficient ...
Removing the first Hermite direction is important for the current analysis and for Damian et al. (2022). If there is a non-vanishing first Hermite direction, it will become the dominant term in the first-step gradient. To be more specific, after the first-step gradient update, neurons satisfy $w_i^{(1)}\approx c_1 \beta +c_2 Hw_i^{(0)}$, where $c_1,c_2$ are scalars, $\beta$ is the first Hermite direction, and the matrix $H$ has full rank within the target subspace due to Assumption 3. When the first Hermite direction $\beta\neq 0$, the $\beta$ term dominates, so $w_i^{(1)}\approx c_1 \beta$. This makes all neurons collapse into a single direction.
We would also like to note that this issue is likely a technical difficulty introduced by the setting we consider. In experiments, we can still achieve strong feature learning ability with a proper choice of hyperparameters. Providing a rigorous analysis beyond the current setting is an interesting and challenging problem to consider in the future.
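To make the role of the first Hermite coefficient concrete, here is a small numerical sketch (not code from the paper, and the activation choice is ours for illustration): it Monte-Carlo estimates $c_1 = \mathbb{E}_{x\sim\mathcal{N}(0,1)}[\sigma(x)\,x]$, which equals $1/2$ for ReLU, and checks that subtracting $c_1 x$ — the kind of preprocessing done to the target — makes the first coefficient vanish.

```python
import random

def first_hermite_coeff(sigma, n=200_000, seed=0):
    """Monte-Carlo estimate of the first Hermite coefficient
    c1 = E_{x ~ N(0,1)}[sigma(x) * x], since He_1(x) = x."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        total += sigma(x) * x
    return total / n

relu = lambda x: max(x, 0.0)
c1 = first_hermite_coeff(relu)
print(abs(c1 - 0.5) < 0.02)  # theory: c1 = 1/2 for ReLU

# Subtract the first Hermite term, so the dominant first-step
# gradient direction beta no longer appears.
relu_centered = lambda x: relu(x) - 0.5 * x
print(abs(first_hermite_coeff(relu_centered)) < 0.02)  # coefficient vanishes
```

With the first coefficient removed, the first-step update is driven by the $Hw_i^{(0)}$ term instead, which keeps the neurons spread over the target subspace rather than collapsed onto a single direction.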
> Could the author be more precise on how they would extend the findings to a polynomial number of samples after Theorem 2 ...
Please refer to the general response where we address the questions regarding sample complexity.
> What do the authors mean when saying "complexity of $f_*$" on Page 4? It would be nice to mathematically formalize this concept.
Thanks for the suggestion. We will make it clear in the revision. By ``complexity of $f_*$'', we mean a quantity that depends only on properties of $f_*$, such as the teacher neurons' norms $|a_i^*|, \|w_i^*\|$, the target subspace dimension $r$, and the angle separation $\Delta$.
> What is $\bar{w}_i$ at page 5?
It is the normalized version of neuron $w_i$, i.e., $\bar{w}_i = w_i/\|w_i\|_2$.
> What is meant for $\varepsilon_0$-net at page 5?
We mean an $\epsilon$-net with $\epsilon=\varepsilon_0$. That is, the neurons cover the whole target subspace well, in the sense that for every direction $v$ in the target subspace there exists a neuron $w$ such that the angle between them is small: $\angle(w,v)\le \varepsilon_0$.
> Presentation of paper and discussion of prior work.
We will improve the presentation of the paper based on the suggestions and discuss more related works in the revision.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I sincerely thank the authors for their rebuttal. I have no further concerns to discuss, and I believe the proposed changes will improve the submission. After carefully reading the response along with the other reviewers' comments, I would like to keep my score as in the original review. | Summary: The present paper studies feature learning in the end phase of training. The authors show that when the loss is small, gradient steps capture relevant directions.
Strengths: - The problem of feature learning studied in the paper is important.
- From a technical point of view, the analysis of phase 3 of the algorithm that shows that the local landscape is benign is interesting and can be used in other problems.
Weaknesses: Comments:
1. In the abstract, the authors write "We show that once the loss is below a certain threshold, gradient descent with a carefully regularized objective will capture ground-truth directions". Where is this threshold? Looking at Theorem 2, it's hard to connect the description in the abstract to the statement proved here.
2. The relation between Assumption 3 and the information exponent should be made explicit. Based on my understanding, the information exponent of the teacher function is always one (because it is assumed to have a linear part).
3. What happens if the information exponent is larger than 1?
4. Why do the head $a$ and the layer $W$ have the same regularization parameter $\lambda$? Also, in line 128, it seems that you are not analyzing the problem at $\lambda = 0$, but rather the ridgeless case where $\lambda \to 0$.
5. The analysis is for gradient descent run on the expected loss (Eq. 2). This setting is not very realistic, and a finite-sample analysis should be performed. What is the sample complexity here? I don't think a concentration-type argument is possible here, at least in the realistic setting where the dimension of the covariates and the number of samples are roughly of the same order.
6. The algorithm analyzed in this paper is not close to typical training methods used in practice. What is the role of Phase 2? Why are we normalizing in this particular way?
7. In the discussion below Theorem 2, the authors write "In these works, neural networks only learn the target subspace and do random features within it". Is this correct? Specifically, which result are the authors pointing to? Can the authors be more formal here? This discussion is very vague.
8. Instead of the lengthy discussion on the construction of the dual certificate, I think the authors should have discussed the general takeaways of this result in more detail.
9. Missing citations and discussion of prior work. The following (highly relevant) papers have not been discussed. Papers [1] and [2] are missing from the discussion of feature learning in the early phase of training. The authors should also discuss other approaches to analyzing feature learning [3], [4], etc.
[1] B Moniri, D Lee, H Hassani, E Dobriban. A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks.
[2] H Cui, L Pesce, Y Dandi, F Krzakala, YM Lu, L Zdeborová, B Loureiro, Asymptotics of feature learning in two-layer networks after one gradient-step.
[3] A Radhakrishnan, D Beaglehole, P Pandit, M Belkin, Mechanism for feature learning in neural networks and backpropagation-free machine learning models.
[4] D Beaglehole, I Mitliagkas, A Agarwala, Gradient descent induces alignment between weights and the empirical NTK for deep non-linear networks.
Technical Quality: 3
Clarity: 2
Questions for Authors: please see weaknesses
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: please see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed feedback. We address the reviewer's concerns below and will incorporate the suggestions in the revision.
>1. In the abstract, ...
Theorem 2 is a combination of early-stage feature learning (Stages 1 and 2) and final-stage feature learning (Stage 3). This sentence refers to the local convergence result (Stage 3), and the threshold is $\varepsilon_0$ in Theorem 2. We show that at the end of Stage 2 (Lemma 4) the loss/optimality gap is below $\varepsilon_0$, so that we enter the local convergence regime in Stage 3 (Lemma 5). We agree that this should have been made clearer, and we will address this issue in the revision.
> 2&3. The relation between assumption 3 and information exponent should be made explicit ...
Thanks for the suggestion. We will make it more explicit in the revision. The information exponent of the teacher network is indeed 1. Assumption 3 here is the same as in Damian et al. (2022), and we use the same preprocessing procedure to remove the linear part. After this preprocessing step, the information exponent becomes 2 due to Assumption 3.
Our results hold when the information exponent is 2 after preprocessing. If it is larger, the current analysis does not work, and making it work would require a generalization of Damian et al. (2022) (Dandi et al. (2023) explored a generalized version of this problem and left it as a conjecture).
Neural networks can learn representations with gradient descent, Damian et al., 2022
How Two-Layer Neural Networks Learn, One (Giant) Step at a Time, Dandi et al., 2023
>4. Why is the head a and the back layer W ...
- Regularization on $a$ and $W$:
This is the same as using weight decay on both layers. At the minimum we have $\|w_i\|_2^2+|a_i|^2 = 2\|w_i\|_2|a_i|$, so the $\ell_2$ regularization $\sum_i(\|w_i\|_2^2+|a_i|^2)$ becomes $2\sum_i\|w_i\|_2|a_i|$, which is effectively an $\ell_1$ regularization over the neuron norms $\|w_i\|_2|a_i|$. Such a regularization favors sparse solutions and helps us recover the ground-truth directions while removing irrelevant ones.
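For a positively homogeneous activation (e.g., ReLU; an illustrative assumption on our part), the step from $\ell_2$ to effective $\ell_1$ can be made explicit via rescaling:

```latex
\begin{aligned}
& a_i\,\sigma(\langle w_i, x\rangle) = \tfrac{a_i}{c}\,\sigma(\langle c\,w_i, x\rangle)
  \quad \text{for any } c>0 \ \text{(positive homogeneity of } \sigma\text{)},\\
& \min_{c>0}\;\Bigl( c^2\|w_i\|_2^2 + \tfrac{|a_i|^2}{c^2} \Bigr)
  = 2\,\|w_i\|_2\,|a_i|
  \quad \text{(AM--GM, with equality at } c^2 = |a_i|/\|w_i\|_2\text{)}.
\end{aligned}
```

Hence at any minimizer the neurons are balanced and the penalty equals $2\sum_i\|w_i\|_2|a_i|$, an $\ell_1$ norm over the per-neuron scales.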
- Analyze the problem of $\lambda=0$ or $\lambda\to0$:
Our goal is to minimize the loss with $\lambda=0$. However, directly analyzing the unregularized case is challenging, so we analyze the case where $\lambda$ is positive but gradually decreases to 0 ($\lambda\to 0$). In this way, the solution eventually converges to a minimum of the $\lambda=0$ problem. In Theorem 2 we show that the unregularized loss ($\lambda=0$) is small at the end of training.
>5. The analysis is for gradient descent ran on expected loss ...
Please refer to the general response where we address the questions regarding sample complexity.
>6. The algorithm analyzed in this paper ...
- Non-standard algorithm:
Recent feature-learning literature often focuses on early-stage learning using layer-wise training (training the first-layer weights $W$ first, then the second-layer weights $a$), which is also not standard in practice. As discussed in the paper, analyzing the entire training dynamics of standard gradient descent, especially the middle stage (Stage 2), is technically challenging and remains an open problem. We hope our results contribute to understanding standard training methods.
- Role of Stage 2:
In our algorithm, Stages 1 and 2 essentially follow the layer-wise training procedure that is common in the early-stage feature-learning literature. The role of Stage 2 is to perform a regression on top of the features learned in Stage 1. This makes the loss small enough that we enter the local convergence regime in Stage 3.
- Regularization and balancing:
We assume the `normalizing' mentioned by the reviewer refers to the regularization or the norm balancing. We'd be happy to clarify further if this does not answer the original question.
The regularization is essentially the same as the $\ell_2$ regularization or weight decay used in practice, since $\|w_i\|_2^2 + |a_i|^2 \ge 2\|w_i\|_2|a_i|$, with equality at the minimum of the $\ell_2$-regularized loss. The reason for using weight decay is discussed further in Section 5: it induces an effective $\ell_1$ regularization over the neurons and helps reduce norm cancellation between neurons.
The norm balancing step is just a preparation step for Stage 3, introduced for technical convenience so that we can enter the local convergence regime.
>7. In the discussion below Theorem 2, ...
We will make this clear in the revision. To be more specific, consider Damian et al. (2022) (although the discussion also applies to other works that rely on a (large) first-step gradient, e.g., Ba et al. (2022), Abbe et al. (2022)). It is shown that after the first-step gradient update, neurons satisfy $w_i^{(1)}\approx c Hw_i^{(0)}$, where $c$ is a scalar and $w_i^{(0)}$ is neuron $w_i$ at random initialization. Given the assumption that the matrix $H$ has full rank within the target subspace, $w_i^{(1)}$ is essentially sampled randomly from the target subspace. This suggests that these neural networks in fact only learn the target subspace and do random features within it.
>8. Instead of the lengthy discussion on the construction of the dual certificate, ....
Thanks for the suggestion. We do believe the construction of the dual certificate is the core technical contribution; however, we will state the general takeaways more clearly in the revision. Our main finding is that training both layers of a 2-layer network to convergence can achieve 0 loss and recover the ground-truth directions. This highlights a strong form of feature learning that complements the early-stage feature-learning literature, where the first-layer weights are typically fixed after one or a few steps of gradient descent.
>9. Missing citations and discussion of prior work...
Thanks for the references. We will add discussions about these works in the revision.
---
Rebuttal Comment 1.1:
Comment: I thank the author for their response. This resolves most of my concerns regarding the paper.
This paper mostly focuses on minimizing the population loss instead of the training loss. However, the analysis of even the population loss is already very challenging. Thus, I will increase my score to 6. | null | null | Rebuttal 1:
Rebuttal: We appreciate the reviewers' detailed feedback and constructive comments. A common question raised by the reviewers concerns sample complexity; we address it below.
First, we would like to emphasize that even the analysis of the population loss is highly non-trivial and requires the new ideas developed in the paper. The finite-sample analysis is not our focus, so we omit it in the current paper.
For sample complexity, we believe the following strategy would yield a polynomial sample complexity. We break the analysis into 2 parts: early-stage feature learning (Stages 1 and 2) and final-stage feature learning (Stage 3).
- Stages 1 and 2: This should follow the results in Damian et al. (2022). The most important step is to show concentration of the first-step gradient (Stage 1). As shown in Damian et al. (2022), using concentration tools we can obtain sample complexity $n = \Theta(d^2)$, where $n$ is the number of samples and $d$ is the input dimension.
- Stage 3: In the local convergence regime, all weights have norms bounded by $O(1)$ due to the $\ell_2$ regularization. Thus, we can apply standard concentration tools to show that the empirical gradients are close to the population gradients given a sufficiently large polynomial number of samples.
In this way, we could get a polynomial sample complexity result.
Below we address more detailed questions on sample complexity:
- Better sample complexity
It is indeed not clear how to obtain a better sample complexity bound like the one the reviewer asked for ($n=\Theta(d)$). A simple parameter-counting argument suggests that $\Theta(dr)$ samples are likely needed. As discussed in Damian et al. (2022), at least $\Theta(d)$ samples are required in this setting, and they only provided a sample complexity of order $\Theta(d^2)$. Moreover, focusing on the first-step gradient, Dandi et al. (2023) showed that with only $n=\Theta(d)$ samples only 1 direction can be learned. In contrast, $n = \Theta(d^2)$ is essential for neurons to learn multiple relevant directions of the target with a single gradient step. This suggests that more than $\Theta(d)$ samples might be necessary to learn multi-index models like the 2-layer networks we consider in the paper. Improving sample complexity, especially in early-stage feature learning (e.g., Abbe et al. (2023), Damian et al. (2024), Arnaboldi et al. (2024)), is an active and interesting research topic, which we leave for future work.
- Impact of information exponent
The information exponent indeed matters for sample complexity. Due to Assumption 3, the information exponent after the preprocessing step is 2 in our setting. In the outline above, the $d^2$ sample complexity appears in Stages 1 and 2. This is because we in fact need an accurate estimate of the degree-2 Hermite polynomial (the matrix $H$), as discussed in Damian et al. (2022). When the target function is harder (the information exponent is larger), we expect the sample complexity to increase accordingly.
Neural networks can learn representations with gradient descent, Damian et al., 2022
Sgd learning on neural networks: leap complexity and saddle-to-saddle dynamics, Abbe et al., 2023
How Two-Layer Neural Networks Learn, One (Giant) Step at a Time, Dandi et al., 2023
Computational-Statistical Gaps in Gaussian Single-Index Models, Damian et al., 2024
Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions, Arnaboldi et al., 2024 | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Explicit Flow Matching: On The Theory of Flow Matching Algorithms with Applications | Reject | Summary: The paper proposes a loss for training flow matching (rectified flows, stochastic interpolants) that uses the target velocity integrated over the available data as the regression target. This reduces the variance of the gradient estimator and can also be applied in the stochastic variant of flow matching.
Strengths: To the best of my knowledge, the idea of using a larger batch of data for the velocity target in the flow matching objective than as input to the network is novel. This reduces the variance of the loss estimate.
The additional compute over the FM loss seems negligible, since only the target velocity field is adapted while the network still receives batches. This makes it an attractive improvement to flow matching training.
Weaknesses: I genuinely like the idea, but cannot recommend acceptance in its current presentation. I am happy to be corrected on any point and to adjust my score.
Below I grouped the weaknesses by category.
## Theoretical contributions can be simplified and potentially reveal existing result (update: largely addressed, but presentation to be improved)
I think the theoretical derivation can be greatly simplified:
1. The notation is overly complicated. Why not use the standard notation for joint $p(a, b)$, conditional $p(a|b)$, and marginal $p(a)$ probabilities instead of index notation, where the index sometimes means "joint", "conditional" or "marginal", and sometimes indexes time $t$ or a condition such as $x_1$? (or $\rho$ instead of $p$).
2. I think this reveals that the new loss is simply obtained by writing the target velocity in the ExFM loss as the expectation over the training data $\rho_1$: Eqs. (8, 10) say that the target velocity is given by $\int w(t, x_1, x) \rho(x|x_1, t) dx$, that is, just averaging the velocities over the entire training data, weighted by the probability of the path (where $x$ is sampled from the linear conditional paths). Given this observation, it seems to me that the actual contribution of the paper does not lie in this new loss, but in how to efficiently estimate this integral, which is currently hidden in Appendix B.
In fact, I think that sections 2.1 and 2.2 can be merged into a simple importance-sampling argument within the original flow matching derivation.
I also think that the authors miss that their third contribution has already been derived in the same form in their reference [10], Eq. 4, and I think the second contribution is a simple extension to different conditional flow fields resulting from different inversions $\varphi^{-1}$.
## Evaluation on tabular data is wrong (update: fixed)
The NLL defined in Appendix H.5.3 does not contain the volume change, which is an integral part of the negative log-likelihood. Did you use this equation for evaluation? My current score reflects the belief that the volume change was accounted for.
Also, in Table 3, sometimes the highest and sometimes the lowest values in each row are marked bold. Which model is better, and does this use the incorrect formula? Please update without e-notation, changing -1.29E+02 to -129; the former is hard to read.
## Toy data evaluation (update: fixed)
It is easy to construct a very good approximation for the moons distribution by taking a Gaussian mixture of values for Table 4.
## Typos (has no influence on my recommendation)
Here is a list of what I found:
- l. 30: introduced -> introduce
- l. 33: base -> based
- l. 65: $rho$ -> $\rho$
- l. 70: need -> needed
- l. 92/93: using map -> using the map
- l. 100: we return to end of the standard CFM loss representation -> ?
- l. 105: just (unknown) -> just the (unknown)
- eq. 7: the integral shares variables with the outside expression, e.g. add tilde on the integration variables
- l. 124: inevitable -?> invertible
- l. 126: have -> has
- eq. 16: consider moving numerical tricks like using softmax out of the theory section of the paper; page 6 already introduces a lot of notation.
- throughout: dispersion -> variance
Technical Quality: 2
Clarity: 2
Questions for Authors: - Is the variance a problem in high dimensions? I would expect that overall there are few collisions of conditional flow paths.
- It is not fair to compare the MSE value of CFM and ExFM directly in Table 2, since they compute different quantities, so why list it?
- Why would the authors expect their method to perform beneficially in terms of the optimal transport distance (NPE)? OT-FM is explicitly built to reduce it; why would ExFM provide a benefit, since it has the same minimum as CFM?
- What is the difference between Table 3 and 6?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: I do not think that the limitations of the work are properly addressed in the theoretical part, contrary to the statement of the authors in the paper checklist. In particular, I did not fully understand how accurately Eq. (10) (which is part of the loss) can be estimated. One question that could be a way towards addressing this is expanding on when the assumption in lines 172/173 is valid (and why this Jacobian is even a problem).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed analysis of our work.
**Let us discuss your comments:**
* ``The notation is overly...``
We've changed the notations and listed changes in the 1-page PDF in the "all reviewers answer".
Wherever we refer to a probability density function, we use the symbol $\rho$ (or similar), whose arguments are real numbers.
Thus we emphasize that these are ordinary functions from the point of view of mathematical analysis, while a symbol like $\mathbb P$ is usually used to denote the probability of an event that appears as its argument.
We have tried to maintain mathematical rigor, since our paper is positioned as a theoretical paper.
As is customary in probability theory, the expression for the conditional probability $A \mid B$ is used only in cases where $A$ and $B$ are understood as events or random variables (and not as values of random variables), and only under symbols like $\mathbb P$, $\mathbb E$, i.e., when we are talking about the probability of an event, a conditional expectation, etc.
* ``This reveals that... ``
We agree that the loss itself could be obtained more easily and with less computation.
We have addressed this comment in the global answer.
* ``The authors are missing that their third contribution ... ``
We agree that this formula is somewhat analogous to ours, as other similar formulas can also be found in the literature.
However, we respectfully disagree that these results subsume our contribution.
Indeed, our explicit form of the loss immediately allows one to write the discrete loss as in Eq. (13), or to use other techniques from Appendix B to estimate the integral.
To the best of our knowledge, such a training scheme has not been described in the literature before.
In addition, our form of the loss allows one, for example, to obtain an explicit expression for the vector field (Eq. (37)) in the case of a Gaussian initial distribution and a Gaussian mixture as the target distribution. Such an explicit expression for the velocity has not been described before.
Eq. (4) of [10] does not have such features.
In addition, we also obtain formulas for the score like Eq. (46).
Our second contribution is that we obtain a loss whose minimum can be equal to zero.
In contrast, the expression in Eq. (4) of [10], being a small modification of the standard CFM loss, does not possess this property.
* ``The NLL defined...``
We appreciate the reviewer's careful examination of our work. To compute the NLL, we follow Lipman et al. (2023), Appendix C, Eq. (27)–(33).
So we calculate NLL as
$\hbox{NLL}=-\frac1N \sum_{i=1}^N \Bigl(\ln \mathcal N(x^0_i \mid 0,I) + f^0_i \Bigr).$
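As a sanity check that this formula does include the volume change (our own minimal 1D illustration, not the paper's code): for the linear field $v(t,x)=ax$ started from $\mathcal N(0,1)$, the accumulated divergence $f=\int_0^1 \mathrm{div}\,v\,dt = a$ must be added to the base log-density, and the result matches the analytic density $\mathcal N(0, e^{2a})$ of the pushed-forward distribution.

```python
import numpy as np

def log_gauss(x, var=1.0):
    # Log-density of N(0, var) at x.
    return -0.5 * np.log(2 * np.pi * var) - x**2 / (2 * var)

def nll_with_volume(x1, a, n_steps=1000):
    # For the toy field v(t, x) = a * x, integrate the ODE backward from
    # t = 1 to t = 0 with Euler steps while accumulating the divergence
    # term f = \int_0^1 div v dt (here div v = a, constant).
    dt = 1.0 / n_steps
    x, f = x1, 0.0
    for _ in range(n_steps):
        f += a * dt        # volume-change (divergence) accumulator
        x -= a * x * dt    # Euler step backward toward t = 0
    # Instantaneous change of variables: log p1(x1) = log p0(x0) - f,
    # so NLL = -(log p0(x0) - f); dropping f would omit the volume change.
    return -(log_gauss(x) - f)

a, x1 = 0.7, 1.3
numeric = nll_with_volume(x1, a)
# The flow scales N(0, 1) by e^a, so the exact density is N(0, e^{2a}).
analytic = -log_gauss(x1, var=np.exp(2 * a))
print(numeric, analytic)
```

The two values agree up to the Euler discretization error; omitting the accumulated $f$ term would shift the NLL by exactly $a$.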
* `` In Table 3...``
We thank the reviewer for pointing out the inconsistency in bolding within Table 3. We answered the comment in the global answer.
* `` It is easy to ...``
In the experiments in Table 4, the experimental setup follows that used in [19].
The main goal of these experiments was a proof-of-concept test of our velocity formula in the regime of unknown initial and target distributions against a recent state-of-the-art approach.
* `` I do not think that...``
We provide some estimates of the error of computing the integral in Appendix B. However, we do not restrict our algorithm to using Self-normalized Importance Sampling (SIS) to estimate the integral in Eq. (10); different methods (including the rejection sampling described in Appendix B, or SIS with reduced bias) can be used. Thus the accuracy of the estimate depends on the chosen method, and the choice of the best method and its error analysis is part of future work.
The independence of the Jacobian from $x_1$ is checked directly for each chosen map.
In all cases we used, this was true.
**Answering your questions:**
* `` Is the variance...``
Indeed, as our experiments show, the presented algorithm performs much better in low dimensions.
However, this behavior is also observed for other algorithms; the same is true, say, for OT-CFM.
Near the point $t=0$ our algorithm still has smaller variance, as we have proven theoretically.
* `` It is not fair...``
While we employed the MSE as a loss function to demonstrate the impact of our method's reduced variance on overall performance, a direct comparison between the two models can be misinterpreted. To address this, we have removed the aforementioned table, as it lacked clarity. The new tables can be seen in the 1-page PDF.
* `` Why would the authors...``
We do not expect to be better than OT-CFM at inference. However, let us emphasize our key differences from the OT-CFM approach. The first is that we have theoretical guarantees, formulated in Theorem 2.4 and Propositions 2.5--2.6. OT-CFM uses minibatch-OT heuristics, for which, to the best of our knowledge, there is no such rigorous analysis.
Second, our variance reduction algorithm is rather a by-product of our theoretical work, and at the same time it showed the same, or in some cases even better, variance reduction as OT-CFM (which is specifically designed for this purpose). In addition to the algorithm itself, we have derived a collection of exact formulas in the paper, along with an analysis of, for example, the stochastic case. OT-CFM, as far as we know, is only a heuristic aimed at obtaining a more efficient algorithm. In contrast, we have tried to present a theoretical analysis on the basis of which many future refinements of existing algorithms can be built, or new ones invented.
We hope that among these algorithms, there will be some that will reduce the NPE.
* `` What is the difference...``
The reviewer is correct in identifying the similarity between Tables 3 and 6. This was an oversight on our part; we apologize for any confusion it may have caused. Table 6 has been removed from the appendix in the revised version.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the comments. However, I still have concerns regarding central points: I think that their method can be derived significantly simpler, and I am confused about the novelty of some results.
- `Simplification`
I cannot find an answer to my claim that the ExFM loss can be derived significantly more simply in the general answer. Can the authors please address my question explicitly? Since the authors agree that the derivation can be greatly simplified, I would in addition appreciate a high-level overview of the simplified variant and a proposed structure for the revised paper (in case the authors already have an updated manuscript available, it is often possible to post an updated PDF under an anonymous link; ask the area chair for guidance).
- `Novelty of results`
I remain confused about the similarity to existing work, so let us establish a common understanding.
The authors comment "that this formula is somewhat analogous to ours, as other similar formulas have also can be found in literature. However, we respectfully disagree that these are consistent with our contribution." So the existing results are analogous but inconsistent?
Starting from the list of contributions in the Introduction, can the authors, for each theoretical contribution, (i) give the central lines where this contribution is achieved and (ii) state the closest known result, together with a list of novel contributions?
- Minor point: `Jacobian independence`
Can the authors say why the assumption in l. 172 is necessary and what it means?
---
Rebuttal 2:
Comment: We thank the Reviewer for the additional important questions and for the discussion. We will answer them one by one.
* ``Simplification
I cannot find an answer to my claim that the ExFM loss can be derived significantly simpler in the general answer. Can the authors please address my question explicitly? Since the authors agree that the derivation can be greatly simplified, I would in addition appreciate a high-level overview over the simplified variant and a proposed structure for the revised paper (in case the authors have already updated the manuscript available, it is often possible to post an updated PDF under anonymous link; ask the area chair for guidance).``
If one is interested only in heuristics and wants to derive a formula or method only for practical verification, then one could proceed as follows.
Let us take Eq. (8) from Lipman et al., "Flow Matching for Generative Modeling".
Formally, we may substitute $p_t(x|x_1)$ from Eq. (10) and $u_t(x|x_1)$ from Eq. (15) of the same paper into this expression to obtain a special case of our formula.
This is a straightforward way to derive our algorithm in the case of a Gaussian initial distribution.
Note that this is highly intuitive reasoning.
In contrast, in our work we used the following rigorous reasoning (in places implicit).
Our conclusions are based on:
a) the theorems from the cited Lipman et al. on the equivalence of CFM and FM gradients (Theorems 1--2, Appendix A), and
b) the theorems from Tong et al., "Improving and Generalizing Flow-Based Generative Models with Minibatch Optimal Transport", which extend the previous result to more general cases (Theorems 3.1--3.4, Table 1).
Thus, these results allow us to start from the losses used in CFM (in its various modifications).
Then we consider an _invertible map_ of the form $\phi_{t,x_1}(x_0)=(1-t)x_0+tx_1+\sigma_s t x_0$, and in order to pass to the case of the simple _non-invertible_ map $\phi_{t,x_1}(x_0)=(1-t)x_0+tx_1$ we rigorously take the limit $\sigma_s\to0$. Thus, we maintain the rigor expected of a theoretical paper.
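For concreteness, the role of the $\sigma_s$ term can be spelled out (an illustrative computation on our part, not a claim from the paper): the map is affine in $x_0$, so

```latex
\phi_{t,x_1}(x_0) = (1-t+\sigma_s t)\,x_0 + t\,x_1,
\qquad
\phi_{t,x_1}^{-1}(x) = \frac{x - t\,x_1}{1-t+\sigma_s t},
```

with denominator $1-t+\sigma_s t \ge \min(1,\sigma_s) > 0$ for all $t\in[0,1]$ when $\sigma_s>0$; at $t=1$ it equals $\sigma_s$, which vanishes as $\sigma_s\to 0$, recovering the non-invertible map.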
Since there are many possible modifications of CFM (listed in Table 1 of Tong et al. or in Table 1 of our paper), as well as many special cases, such as specific initial and target distributions or a stochastic modification of the equation in FM that is consistent with DDPM, we could not include all of these combinations in our paper.
Rather, we have described in detail a method that can be used to obtain further results similar to those we have already given.
But to show the importance of the method, we have given some corollaries of the loss representation in this new form, for example: a) a rigorous proof of variance reduction; b) numerical results of our algorithm comparable to those of OT-CFM (which is tailored for variance reduction), following directly from this loss representation; c) explicit expressions for trajectories in the Gaussian case (Eq. (36)), an explicit velocity for a Gaussian mixture target (Eq. (37)), an integral-form expression for the score (Eq. (46)), the applicability of the technique when both distributions are unknown and only samples from them are available (Eqs. (51)--(52)), etc.
All of these results were obtained in one way or another from various loss modifications, similar to the one in native CFM, using our method described in detail in the main body of the paper.
Because we hope that these results do not exhaust all possible applications of our technique, and because of page limitations, we have moved the specific results to the Appendix, focusing in the main body on the method used to obtain them.
Thus, in the main text we provide detailed calculations so that it is clear how we arrive at each result step by step.
In the modified version of the article we added further explanations that we discussed with the other reviewers, tables, and some figures discussed in the global answer, and changed the notation. The structure of the article has not changed dramatically, nor has it been dramatically simplified. We maintain all the nuances of the formulas' derivations, as all these steps are part of the rigorous analysis, which is also an aim of the paper. The revised version of the paper can be found at this anonymous link: https://drive.google.com/file/d/17HDXtwWB505revPOXisi1F2lprECwUUu/view?usp=sharing.
---
Rebuttal 3:
Comment: * ``Novelty of results
I remain confused about the similarity to existing work, so let us establish a common understanding.
The authors comment "that this formula is somewhat analogous to ours, as other similar formulas can also be found in the literature. However, we respectfully disagree that these are consistent with our contribution." So the existing results are analogous but inconsistent?
Starting from the list of contributions in the Introduction, can the authors for each theoretical contribution (i) give the central lines where this contribution is achieved and (ii) what the closest known result is, together with a list of novel contributions?``
Let us go through our contributions.
1. ``A tractable form of the FM loss is presented, which reaches...., but has a smaller variance``
To the best of our knowledge, a loss of this form, one that can reach zero and contains integral expressions of the distribution functions, has not appeared in the literature (unlike the expression for the vector field; see the next point below). So, apparently, the closest result corresponding to this contribution is the already mentioned paper by Lipman et al., Eq. (6), about which the authors also write: "Flow Matching is a simple and attractive objective, but naïvely on its own, it is intractable to use in practice since we have no prior knowledge for what an appropriate $p_t$ and $u_t$ are."
In addition, the rigorous analysis of variance that is mentioned in this contribution has not been encountered either.
2. ``The explicit expression in integral form for the vector field delivering the minimum to this
loss (therefore for Flow Matching loss) is presented.``
Special cases of, or analogs to, our Eq. (16) have appeared in various sources. The closest to ours is probably Eq. (4) and the unnumbered formula from Sec. 5.1 of Liu et al., "Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow.".
Also a formula similar to our Eq. (17) with softmax function is found in Scarvelis et al. "Closed-form Diffusion Models", Eq. (2) and others.
But we note that in all the works in which similar formulas appear, a) these formulas are not rigorously derived, and b) only one specific formula is given; in contrast, we have proposed a whole family of formulas for the vector field (and score), from the case where we evaluate the integral not with SIS but with rejection sampling (Appendix B) to formulas for DDPM-like models, i.e., for ODEs containing a stochastic term, namely the expression for the vector field in Eq. (44) and the expression for the score in Eq. (46). We have not encountered such expressions in the literature.
3. ``As a consequence, we derive expressions for the flow matching vector field and score in several particular cases (when linear conditional mapping is used, normal distribution, etc.);``
We refer here, for example, to the results presented in Fig. 2 and Figs. 4--6 (together with the analytical expressions by which they were obtained). We have not encountered similar formulas; similar studies on, for example, Gaussian Mixture separation are cited in the introduction ([15, 8]), but these studies are far from ours in terms of ideology.
4. ``Analytical analysis of SGD convergence showed that our formula have better training variance on several cases``.
In this case, we are referring to the Theorems and results of Sec. 2.4, "Irreducible dispersion of gradient for CFM optimization". To the best of our knowledge, no similar studies have been conducted.
5. ``Numerical experiments show that we can achieve better learning results in fewer steps``.
The topic of variance reduction in Flow-Matching-like models is addressed most closely in the series of papers by Tong et al. on OT-CFM, where this heuristic technique was created specifically for practical implementation. We compare against this technique in experiments in different dimensions. It turns out that in many cases we are not inferior to the results of this heuristic. Moreover, our algorithm was rather a side effect of our theoretical calculations; it may well be accelerated by more efficient techniques (besides SIS or rejection sampling) for evaluating the integral.
---
Rebuttal 4:
Comment: * ``Minor point: Jacobian independence
Can the authors say why the assumption in l. 172 is necessary and what it means?``
This condition is not critical; it merely simplifies the formula for the vector field. Consider the expression for the probability density function of the intermediate point (L97, in the old notation):
$\rho_{x_1}(x,t)=[\phi_{t,x_1}]_*\rho_0(x):=\rho_0\big(\phi_{t,x_1}^{-1}(x)\big)\det\big[ \partial\phi_{t,x_1}^{-1}(x)\big/\partial x\big].$
When we substitute this expression into the conditional distribution density function (7), the Jacobian under consideration appears in both the numerator and the denominator. But if it does not depend on $x_1$, it can be taken out of the integral in the denominator and cancels with the same expression in the numerator.
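Written out explicitly (a schematic sketch in the old notation, using ratio form for the conditional density of $x_1$ given the intermediate point, with $J_t(x):=\det\big[\partial\phi_{t,x_1}^{-1}(x)\big/\partial x\big]$ assumed independent of $x_1$), the cancellation reads:

$$\rho(x_1\mid x,t)=\frac{\rho_0\big(\phi_{t,x_1}^{-1}(x)\big)\,J_t(x)\,\rho_1(x_1)}{\int \rho_0\big(\phi_{t,\tilde x_1}^{-1}(x)\big)\,J_t(x)\,\rho_1(\tilde x_1)\,d\tilde x_1}=\frac{\rho_0\big(\phi_{t,x_1}^{-1}(x)\big)\,\rho_1(x_1)}{\int \rho_0\big(\phi_{t,\tilde x_1}^{-1}(x)\big)\,\rho_1(\tilde x_1)\,d\tilde x_1}.$$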
Since all important maps under consideration have this property, we considered mainly such simplified formulas (without explicitly writing out the Jacobian).
In the case of a general Jacobian, we arrive at expressions like Eqs. (33)--(34), where the Jacobian enters explicitly.
---
Rebuttal Comment 4.1:
Comment: ### Simplification
I read the answer as follows: the authors do not agree that the derivation can be simplified, and stick with the existing derivation nonetheless. This is in contrast to their earlier answers suggesting that they indeed found a simpler derivation:
> This algorithm itself could have been obtained in a simpler way.
> We agree that the loss itself could be obtained more easily and with less computation. We have addressed this comment in the global answer.
I would advise the authors to adopt a more direct communication style in the future.
Since I did not come up with an alternative derivation, I consider this point as addressed but strongly advise the authors to try and simplify the derivation for an improved version of the article. I think this will significantly improve the readability and chances of adoption. In my view, a promising starting point would be via identities of expectation values.
### Contributions
Thanks for this list of contributions. I agree that these results are in the paper and highlight that the two-batch-size algorithm is a useful training strategy, as evidenced by the reduced variance. I also think that the derived closed-form expressions are useful.
### Conclusion
Overall, the paper makes a good case for two-batch-size training and has some interesting analytical results. However, the presentation also in the updated version is hard to follow. I therefore increase my score only slightly, to a borderline accept.
---
Reply to Comment 4.1.1:
Comment: We thank the reviewer for the time and effort. We are grateful for the increased score!
We will take the Reviewer’s comments into account as we work to enhance the overall clarity of the presentation. We aimed to maintain a high level of rigour in the derivations to align with the theoretical nature of the work. We understand that this might have hindered readability in some parts. In the revised version, we are going to incorporate intuitive, non-rigorous explanations (simplified derivations of the formulas) to provide a clearer overview before diving into the mathematical details. | Summary: The paper proposes an analytic formula for the vector field satisfying the continuity equation for the given change of density which interpolates between two distributions in the sample space. This is a common setting in the Flow Matching model [6] (Rectified Flows [1], Stochastic Interpolants).
The authors apply the formula for several special cases like linear interpolation between samples and propose to use this formula for the training. They study the proposed training procedure empirically for synthetic examples and CIFAR-10 images.
[1] Liu, Xingchao, Chengyue Gong, and Qiang Liu. "Flow straight and fast: Learning to generate and transfer data with rectified flow." *arXiv preprint arXiv:2209.03003* (2022).
[6] Lipman, Yaron, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. "Flow matching for generative modeling." *arXiv preprint arXiv:2210.02747* (2022).
Strengths: The presentation of the proposed method is clear.
Weaknesses: The proposed formula is already well-known in the community. It was proposed in [1] (see eq. 4, Def 3.1, Sec 5.1 for demonstration) and [2] (see Appendix D). In [3] (see eq. 4), the authors propose to use an analogous formula for training a generative model and conduct a much more exhaustive empirical study. Moreover, [4,5] develop different applied methods building upon this formula.
The downside of this formula is also very well-known in the community, i.e. one has to use large batch sizes to accurately estimate the vector field, which comes with a significant computational cost. Moreover, for any finite size, the estimation of the vector field is biased and the estimation of the loss is biased which would deteriorate generation quality when used to train large-scale models. This limitation is not adequately studied in the paper, e.g. the models are not compared in terms of the training time and memory used.
Given that the main contribution of the paper has already been proposed and the empirical study of this formula is unsatisfactory, I cannot recommend this paper for acceptance.
Minor comments:
- The authors refer to the original FM framework [6] as Conditional Flow Matching, which was proposed in [7].
- The objective proposed in Flow Matching is already tractable, so it is not clear what “a tractable form of the Flow Matching objective” means.
- There is a typo in line 65.
- The derivative notation used is inconsistent with its description in the text.
- From eq. 19, I assume that “dispersion” means “variance”.
[1] Liu, Xingchao, Chengyue Gong, and Qiang Liu. "Flow straight and fast: Learning to generate and transfer data with rectified flow." *arXiv preprint arXiv:2209.03003* (2022).
[2] Neklyudov, Kirill, Rob Brekelmans, Daniel Severo, and Alireza Makhzani. "Action matching: Learning stochastic dynamics from samples." In *International conference on machine learning*, pp. 25858-25889. PMLR, 2023.
[3] Xu, Yilun, Ziming Liu, Max Tegmark, and Tommi Jaakkola. "Poisson flow generative models." *Advances in Neural Information Processing Systems* 35 (2022): 16782-16795.
[4] Scarvelis, Christopher, Haitz Sáez de Ocáriz Borde, and Justin Solomon. "Closed-form diffusion models." *arXiv preprint arXiv:2310.12395* (2023).
[5] Xie, Tianyu, Yu Zhu, Longlin Yu, Tong Yang, Ziheng Cheng, Shiyue Zhang, Xiangyu Zhang, and Cheng Zhang. "Reflected Flow Matching." *arXiv preprint arXiv:2405.16577* (2024).
[6] Lipman, Yaron, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. "Flow matching for generative modeling." *arXiv preprint arXiv:2210.02747* (2022).
[7] Tong, Alexander, Nikolay Malkin, Guillaume Huguet, Yanlei Zhang, Jarrid Rector-Brooks, Kilian Fatras, Guy Wolf, and Yoshua Bengio. "Improving and generalizing flow-based generative models with minibatch optimal transport." *arXiv preprint arXiv:2302.00482* (2023).
Technical Quality: 2
Clarity: 2
Questions for Authors: I have no questions for the authors.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: The authors do not provide a necessary literature review nor study the limitations of the proposed approach (see Weaknesses section above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the specific notes. We will be happy to answer any additional
questions you may have.
**Let us discuss your comments:**
* `` ... formula is already well-known...``
Thank you for the links and the comment.
We want to emphasize that we do not present formula (10) (or (11)) as our new main result, but a whole family of formulas, some of which are presented in Table 1, in particular, the situation with an additional stochastic term (Eq. (44) for vector field as well as Eq. (46) for the score).
Let us describe in detail the difference between our work and the cited ones.
We cited paper [1] as [10]. Eq. 4 there is a special case of our formula for the linear map.
Def. 3.1 is a common definition; one needs to know the explicit form of $X(t)$ to get an explicit formula for the velocity.
Sec. 5.1 describes only (biased) self-normalized importance sampling. We (in Appendix B) propose to use unbiased rejection sampling as well.
Appendix D in the paper [2] Neklyudov et al.
states
"In this section, we find velocity ... the conditional $k_t(x_t\mid x)$ is a Gaussian distribution"
In contrast, we consider _any_ initial and target distributions as well as arbitrary conditional.
We respectfully disagree that Eq. 3 in [3] Xu et al. is similar to ours. It is completely different from our result.
Moreover, in this paper
the Poisson equation (Eq. 1 in [3]) is solved in an extended (of dimension $d+1$) space.
This is fundamentally different from the usual flow equation studied in many Flow Matching articles.
In addition, we focused on theoretical research, in particular, we rigorously argue for less variance in our approach (we submitted to ``Primary Area: Learning theory``). Thus, conducting extensive experiments is not the goal of our paper.
The paper [4] uses a different loss, and, in addition, it does not analyze the implications of the explicit formula, such as the analysis of variance, explicit expressions for the vector field, etc.
Paper [5] was published on arXiv on May 26, 2024, later than we submitted our full paper to NIPS-2024 (May 22).
* `` The downside of this formula... use large batch sizes...``
One of the main practical contributions of our paper is the use of two batch sizes. The first batch size is used in training: it defines the number of terms in the loss and determines how much memory is used when computing gradients in backpropagation. This batch size roughly coincides with the one used in similar experiments (so that the amount of memory used is about the same). The second batch size determines how the right-hand side of the formula for the vector field (e.g., Eq. (15)) is computed. Since no neural network is involved in this computation, we can take this batch size much larger than the first. Even if there is not enough memory to compute the sum at once, the computation can easily be split into successive steps or parallelized. In our numerical experiments, the second batch size was significantly ($\sim10^1$--$10^2$ times) larger than the first. The difference in training time depends on the model used: the time spent on backpropagation is the same if the first batch size is the same, and although our algorithm spends additional time computing the right-hand side, this addition is usually insignificant for "heavy" models.
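For concreteness, the two-batch-size scheme can be sketched roughly as follows. This is a minimal NumPy illustration under the assumption of the linear map $\phi_{t,x_1}(x_0)=(1-t)x_0+tx_1$ with standard-Gaussian $\rho_0$; the function and variable names (`sis_velocity`, `x1_pool`) are ours and not from the paper:

```python
import numpy as np

def sis_velocity(x_t, t, x1_batch):
    """SIS estimate of the marginal velocity at (x_t, t) for the linear map
    phi_{t,x1}(x0) = (1 - t) x0 + t x1 with standard-Gaussian rho_0.
    x1_batch: the second (large) batch of target samples, shape (B2, d)."""
    # log rho_0(phi^{-1}(x_t)); the (1-t)^{-d} Jacobian factor cancels in the softmax
    z = (x_t - t * x1_batch) / (1.0 - t)           # phi_{t,x1}^{-1}(x_t), shape (B2, d)
    logw = -0.5 * np.sum(z * z, axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                    # self-normalised weights
    cond_v = (x1_batch - x_t) / (1.0 - t)           # conditional velocities phi'
    return w @ cond_v                               # weighted average -> marginal velocity

# Training-step sketch: a small batch B1 of (x_t, t) pairs is regressed onto
# SIS targets computed from a much larger batch B2 (no gradients flow through it).
rng = np.random.default_rng(0)
x1_pool = rng.normal(loc=3.0, size=(512, 2))        # second (large) batch B2
x0 = rng.normal(size=(8, 2))
x1 = rng.normal(loc=3.0, size=(8, 2))
t = 0.5
x_t = (1 - t) * x0 + t * x1                         # first (small) batch B1 for the network
targets = np.stack([sis_velocity(x, t, x1_pool) for x in x_t])
# loss = mean || v_theta(x_t, t) - targets ||^2   (backprop only through v_theta)
```

In this sketch only the small batch `x_t` would enter backpropagation, so `x1_pool` can be made very large (or processed in chunks) without increasing gradient memory.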
We believe a more in-depth analysis of the computational aspects is beyond the scope of this paper but is certainly a valuable direction for future work.
* `` The authors refer...``
Thank you for pointing this out. We correct the corresponding part of the paper in the revised version.
We would like to note that, to the best of our knowledge, the original authors of [6] did not publicly release code. Consequently, we opted to utilize the implementation from [7] as a solid foundation for our research, as it closely adheres to the original framework.
* `` ...not clear what “a tractable form of the Flow Matching objective” means....``
By "tractable" we mean that the loss is written in a way that immediately allows one to write the discrete loss as Eq. (13), or to use other techniques from Appendix B to estimate the integral; for example, the possibility of using a different batch size to estimate this integral than the batch size used when training the model. To the best of our knowledge, such a training scheme has not been described in the literature before. In addition, our form of the loss allows one, for example, to obtain an explicit expression for the vector field (Eq. (37)) in the case of a Gaussian initial distribution and a Gaussian Mixture target distribution. The Gaussian separation problem remains important in the field of Flow Matching and Diffusion Models, and, to the best of our knowledge, such an explicit expression for the velocity has not yet been described.
We have added a corresponding explanation in the modified version of the article.
* `` The derivative notation...``
On L72 we define "Hereinafter the dash indicates the time derivative".
In the vast majority of cases, the dash appears in the expression $\phi^\prime_{t,x_1}(x_0)$ and denotes the derivative with respect to the time parameter $t$.
In Eq. (13) dash means derivative with respect to spatial variable $t_j$.
There are no ambiguities.
In other places, as p24, L624 or L767, we define dash explicitly.
* `` The authors do not...``
The introduction provides a solid foundation and links to relevant literature, including works with comprehensive overviews such as Tong et al. (2023b). However, it's important to note the relative scarcity of existing methods tailored to this specific type of analysis. To this end, we have provided a theoretical analysis of potential practical limitations in Appendix B. Most of our theoretical results are formulated in the form of Theorems or Propositions, thus, their formulation contains the conditions under which they are true.
---
Rebuttal Comment 1.1:
Title: rebuttal acknowledgment
Comment: Thank you for your response. My main concerns remain unaddressed. The main proposed formula is very well known and such minor contributions as using two different batch sizes for estimating two different expectations must be followed up by an extensive empirical study. As a minor comment, I suggest the authors check on the definition of dash https://en.wikipedia.org/wiki/Dash.
---
Rebuttal 2:
Comment: We thank the reviewer for pointing out weaknesses in our paper. At the same time, we respectfully disagree that our contribution consists only of an algorithm with two different batch sizes. We listed the contributions of the paper in the first bullet of our general reply to all reviewers, and noted that they amount to much more than a single (already known) formula or a specific algorithm.
We tried to address all of your concerns in our rebuttal attempt. So, if you specify which of our answers were unconvincing, it would be very helpful for us.
We also apologize for misunderstanding with notation. The point is that we made a mistake with the word ``dash`` (not with the essence of the notations, as we thought at first). To avoid ambiguity, we replaced the corresponding text with the following "Hereinafter the symbol "${}^\prime$" indicates the time derivative:...". | Summary: The paper proposes a novel approach to training flow-based generative model by deriving the conditional flow matching objective function with respect to the flow function. The author argues that this new method of training will reduce variance, add stability, and ultimately lead to faster convergence. Additionally, the reformulation allow derivation of the exact vector field expression, and in some simple cases, enables the computation of the oracle trajectory solution.
Strengths: - If the derivations are correct, this methodology could potentially add some innovations in the field of flow matching.
Weaknesses: - The paper is difficult to follow and lacks clear writing and organization. Specifically, the authors' use of notations is very confusing.
- The mathematical computations do not appear to be very rigorous and some assumptions seem very incorrect. I may have misunderstood some derivations, so please correct me if I'm wrong (see Questions section).
- The experimental results are not very robust or convincing. For instance, while the paper proposes that one of their contributions is the reduction of variance during training, many of the results demonstrate larger variance across numerous, different metrics.
- Overall, the paper feels like it requires substantial revisions and is far from being polished.
Typos:
- Line 65, rho_1$\rightarrow$ $\rho_1$.
- Line 130, practical $\rightarrow$ practical form.
Technical Quality: 1
Clarity: 1
Questions for Authors: - I find Eqn. 2 confusing. The authors should either replace the second term with the conditional vector field notation or explicitly state that the second term represents a derivative with respect to $t$.
- Eqn. 4 does not seem correct. Why can you rewrite the density function $\rho_j(x_1, x_t, t) = \rho_{x_1}(x_t, t)\rho_1(x_1)$? The distribution of the random variable $X_t$ is dependent on the distribution of $X_1$, so how can we write these terms as two independent variables.
- Also, in Eqn. 4 I believe $\rho_{x_1}(x_t, t)\rightarrow\rho_{t}(x_t, t)$. What does subscript $x_1$ mean? Does this mean it is some conditional distribution? This notation seems ambiguous.
- Why do you use the notation $p_m(\cdot)$ and not $p_t(\cdot)$. The distribution is different with respect to $t$, using $m$ makes it seems like a joint distribution of all $t$ values.
- Have the authors tried running experiment on non-Gaussian priors?
- Rather than explaining the training scheme in detail, I would suggest adding pseudo code.
Confidence: 3
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: The author has adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review and comments.
**Let us also discuss your comments:**
* ``The paper is difficult to follow...``
We have carefully revised the manuscript to enhance its readability and coherence. We made several changes in the notation, which we summarize in the one-page PDF in the general answer. While striving for balance in the theoretical analysis, we have included all proofs necessary to support our findings. Regarding notation, we have mostly employed standard notation from mathematical analysis and probability theory.
* ``The experimental results... ``
While we highlight variance reduction as a key contribution, we understand the limitations of standard metrics like W2, NLL, and Energy distance. To address this, we have included supplementary visualizations that provide a more intuitive understanding of sample quality. We are eager to explore additional metrics suggested by the reviewer to further strengthen our findings.
Our loss function consistently exhibits lower values across most datasets, indicating reduced variance compared to baselines. However, we admit that there are instances, particularly in some tabular datasets (especially BSDS dataset), where this reduction is less pronounced.
To provide a more comprehensive understanding of our method's behavior, the paper includes additional figures illustrating the evolution of loss and metrics over training steps (Figures 7, 9, 10, and 8, 11, respectively). These visualizations offer compelling evidence of the variance reduction achieved during training.
We have incorporated additional analyses for toy 2D data, including Energy Distance and loss over steps, as well as expanded visual comparisons and datasets, which also includes density comparisons of distributions. These results are presented in a revised version of the paper and the results of expanded visual comparison can be already viewed in the new supplementary 1-page PDF. Our experiments clearly indicate that our method consistently outperforms CFM and, in most cases, OT-CFM, which was specifically developed to address variance.
**Answering your questions:**
* ``I find Eqn. 2 confusing...``
On the L72 we write: "Hereinafter the dash indicates the time derivative.". We added additional mathematical explanation of the dash symbol to the revised version of the paper : $\phi^\prime_{t,\cdot} (X):=\frac{d}{dt}\phi_{t,\cdot} (x)|_{x=X}$.
* ``Eqn. 4 does not seem correct... ``
The density $\rho_{x_1}$ is defined in L97:
$\rho_{x_1}(x,t)=[\phi_{t,x_1}]_{*}\rho_0(x)$.
The dependence of the value $x_1$ of the random variable $X_1$ is emphasized by the lower index $x_1$.
Indeed, different densities have the same letter for designation ($\rho$) in our paper, and the meaning is clear only from the indices.
We tried to avoid notations like $A|B$ as much as possible, since such notations are used for events or random variables, while we narrate in the stricter terms of integrals; thus the arguments of the functions $\rho$ have the meaning of specific values of random variables, not the random variables themselves.
* ``Also, in Eqn. 4...``
As stated in the answer to the previous question, in this case $x_1$ in the index has the meaning of the dependence of the density function $\rho_{x_1}(\cdot, t)$ on the value $x_1$ of the random variable $X_1$.
However, we have changed the notations to more convenient ones.
* ``Why do you use...``
We change the notations and listed changes in the 1-page PDF in the "all reviewers answer".
From these new notations one can clearly see what we consider to be the argument of the function and what is parameter(s).
Since we considered $\rho$ and similar functions as functions from the point of view of mathematical analysis and worked more with integrals than with mathematical expectations, from this point of view it does not matter where to write the parameter -- in the lower index, or as an argument; the choice of one or the other designation is dictated by convenience.
We consider probability density function $\rho_m(x_t,t)$ as a function of the first argument,
considering $t$ as a parameter.
In Flow Matching theory, the time variable may also be treated as a random variable, so interpreting the function $\rho_m(x_t,t)$ as a joint density is legitimate.
However, in our calculations, we have only the integral over time as an external integral and do not give the variable $t$ the meaning of a random variable (unlike, of course, the random variables corresponding to the initial and target distributions).
* ``Have the authors tried... ``
Yes, Table 4 summarizes the results for the case when neither initial nor final distributions are known,
but only point clouds are given.
The moons distribution was considered as the initial distribution in some of these experiments.
In this case, we work not with the original formula for the vector field, but with its modification Eq. (51).
For the original version of the discrete loss Eq. (14) with vector field given as Eq. (16),
we tried other distributions as $\rho_0$, in particular the uniform distribution.
But this replacement did not give any advantages; moreover, the use of the Gaussian distribution allows us to use a rather fast and accurate SoftMax implementation.
Thus, we did not investigate further in this direction and give no numerical results on it in the paper.
* ``Rather than explaining...``
Thank you for the suggestion.
We believe the detailed algorithms in Appendix C offer a comprehensive understanding of the training process.
But they did not fit into the main text along with their description due to space limitations. Since the paper is focused on theoretical calculations, we decided to move the practical algorithms to the Appendix in this version of the article.
We will return it to the main text if it is possible to use an additional page.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing the notation issue. While the improvement is noted, I believe the paper could benefit from further refinement in terms of overall organization, writing clarity, and consistent notation. As such, I will be maintaining my current score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer. We acknowledge the areas identified for potential improvement and will carefully consider these points for future revisions. | Summary: This works aims at producing a lower variance loss for flow matching. This is done by using the formula for the ground truth marginal velocity field, estimating it using self-normalized importance sampling, then regressing onto this estimated marginal velocity field.
Strengths: Variance reduction for CFM is a useful avenue of research
Weaknesses: - This paper lacks polish. I felt this paper is cumbersome to read, with a lot of heavy notation that can be drastically simplified, whereas the proposed algorithm is quite simple.
- The proposed approach is not exactly novel. The "Explicit Flow Matching" objective is just the original "Flow Matching" objective where we regress onto the optimal velocity field. The practical implementation being proposed is a simple application of self-normalized importance sampling.
- Importantly, the proposed approach leads to a biased objective (when estimated through a minibatch) where the optimum is not guaranteed to be the correct velocity field. This is not a problem for the low-dimensional experiments but importance sampling becomes more problematic in high dimensions.
Technical Quality: 3
Clarity: 1
Questions for Authors: The ExFM objective (Eq 8 in this paper) is the same as the FM objective [Ref. 1, Eq 5] where u_t is given by the marginal velocity field [Ref. 1, Eq 8], equivalent to Eq 10 in this paper. That this FM objective has the same gradient as CFM (Theorem 2.1 in this paper) is also stated in [Ref. 1, Theorem 2].
Given this, I don't think ExFM should be treated as a new loss.
The use of self-normalized importance sampling (SIS) for estimating the marginal velocity field could be interesting in its own right. However, it is misleading to state that this objective can reach zero because of the need for estimating the marginal velocity field. SIS introduces bias in favor of reducing variance. There should be some analysis on the bias introduced by the objective, comparing the learned model to the marginal velocity field.
[1] "Flow matching for generative modeling"
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The proposed ExFM objective is equivalent to the original FM objective, and the paper should not over-claim this contribution. This objective is intractable, so the use of smart estimation methods is interesting in its own right.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you very much for your analysis of our work and the specific notes you
formulated! If you have any additional questions, we will be happy to answer them and make
additional edits to the text of the manuscript to improve the quality of the presentation.
**Let us discuss your comments:**
* `` This paper lacks polish... ``
We thank the reviewer for their valuable feedback. We agree that clarity is essential, and we have carefully revised the text to enhance readability.
We make several changes in the notation which we put in 1-page PDF document in the general answer.
While the proposed algorithm is indeed straightforward, the core contributions of this paper lie in the theoretical analysis of Flow Matching.
This analysis necessitates a certain level of mathematical notation to ensure rigor and precision. We have made every effort to simplify notation where possible without compromising the clarity of the proofs. We believe that a rigorous analysis is essential for the broader community to understand the algorithm's implications and potential extensions.
We emphasize, as we write about it in Introduction, that the new loss we obtained is only a consequence of our analysis and not the main result. Of course, this loss itself could have been obtained more quickly, but the point of our study is to develop a method that allows one, for example, to obtain the various modifications of the exact formulas collected in Table 1.
* `` The proposed approach is not exactly novel... ``
We thank the reviewer for their insightful comments. We agree that our proposed loss function can be seen as a special case of the Flow Matching objective.
Analogs of our formula are given in other papers and we have cited these papers.
However, to the best of our knowledge, the points listed in the general response to the reviewers are new and represent our main contributions.
Thus, our investigation goes far beyond presenting a single formula for a vector field like Eq (15) or a single practical algorithm.
* `` Importantly, the proposed approach leads to a biased objective...``
We appreciate the reviewer's concern about the potential bias of our objective function in high-dimensional settings.
Indeed, SIS may introduce bias. To circumvent this issue, in addition to using SIS directly, we use rejection sampling, which removes the denominator in the velocity formula; see Appendix B, ''Estimation of integrals''.
Thus, our variance reduction method has an unbiased modification as well.
Note that no bias was observed in the experiments using SIS, so we performed most of the experiments using this technique, as it was computationally more efficient.
We have added an explicit mention of rejection sampling in the main text to avoid misunderstandings.
In addition to this, as correctly pointed out, flow matching approaches, including extensions based on Optimal Transport (OT), generally suffer from challenges in high dimensions.
While we acknowledge the limitations of our current approach in handling extremely high-dimensional data and long time horizons, we believe that our work provides valuable insights into the theoretical underpinnings of the problem. We have conducted a thorough analysis of the bias in our objective, as detailed in Section 2.4, and demonstrated the effectiveness of our method in lower-dimensional settings. We consider extending our approach to higher dimensions and longer time horizons as promising future work.
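For concreteness, the finite-sample bias under discussion can be seen in a small generic sketch of self-normalized importance sampling (a NumPy toy example with an arbitrary Gaussian target and proposal, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def snis_estimate(f, log_w, n=10_000):
    """Self-normalized importance sampling: estimate E_p[f(x)] as
    sum_i w_i f(x_i) / sum_i w_i, with proposal samples x_i ~ q = N(0, 1)
    and unnormalized weights w_i proportional to p(x_i) / q(x_i).
    Being a ratio of two Monte-Carlo averages, the estimator is
    consistent but biased at any finite n."""
    x = rng.normal(size=n)          # samples from the proposal q
    w = np.exp(log_w(x))            # unnormalized importance weights
    return np.sum(w * f(x)) / np.sum(w)

# Toy target p = N(1, 1): log p(x) - log q(x) = x - 1/2 up to a constant,
# so the exact value of E_p[x] is 1.
est = snis_estimate(f=lambda x: x, log_w=lambda x: x - 0.5)
```

With this toy target the exact mean is 1, and the SNIS estimate only approaches it as the number of samples grows; the discrepancy at finite sample size is the bias referred to above.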
**Answering your questions:**
* ``The ExFM objective is the same as the FM objective ... ``
We agree that there are similarities between the ExFM and FM objectives. But as the authors of [1] (Lipman et al., 2022) say in the discussion of this loss: ``Flow Matching is a simple and attractive objective, but naïvely on its own, it is intractable to use in practice since we have no prior knowledge for what an appropriate $p_t$ and $u_t$ are.''
Our work provides an explicit, integral form of the FM loss, enabling direct algorithm development based on the discrete loss Eq. (14).
Moreover, Theorems 2.1 and 2.5 offer novel insights into the relationship between ExFM and CFM, demonstrating its potential advantages, such as smaller variance.
* ``The use of self-normalized importance sampling...``
We agree that if we evaluate the integral over samples, this evaluation may not be accurate, and thus the discrete loss (for example, $L^d_{\hbox{\tiny ExFM}}$ in Eq. (14)) does not reach zero at the minimum. However, we were talking about reaching zero in the context of the exact value of the integral, i.e., in the case when we consider the loss like $L_{\hbox{\tiny ExFM}}$ in Eq. (8),
where the expression through the integrals is subtracted from the expression for the model $v_\theta$ under the norm.
In this case, at the exact value of the vector field, the integrals (which correspond to mathematical expectations) yield zero in the final result.
And as we stated above, in addition to SIS, we proposed other, unbiased ways of evaluating the integral in Appendix B.
Some of our experiments have shown that using SIS introduces no noticeable bias and performs even slightly better than our proposed rejection-sampling-based estimation. However, a comparison of these methods is orthogonal to the main goals of our paper.
Note that since our work has a theoretical focus (we submitted the paper to the area "Learning theory"), it was not our goal to find the best algorithm for the approximate calculation of this integral. On the contrary, we believe that besides the SIS and unbiased rejection sampling we have proposed as examples, there are many other ways to improve the calculation of the integral in question (for example, [3], Gabriel Cardoso et al., ''BR-SNIS: Bias Reduced Self-Normalized Importance Sampling'', cited in our paper).
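The unbiased alternative mentioned here can likewise be sketched generically (a toy NumPy rejection sampler with an assumed Gaussian target and proposal chosen only for illustration, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

def rejection_sample(n, log_M=np.log(3.0)):
    """Toy rejection sampler: accept proposals x ~ q = N(0, 2) with
    probability p(x) / (M q(x)) for the target p = N(1, 1).  Accepted
    draws are exact samples from p, so Monte-Carlo averages over them
    are unbiased -- no normalizing denominator appears, unlike in SNIS."""
    out = []
    while len(out) < n:
        x = rng.normal(scale=2.0)
        # Exact log p(x) - log q(x); the sqrt(2*pi) factors cancel:
        log_ratio = np.log(2.0) - 0.5 * (x - 1.0) ** 2 + x ** 2 / 8.0
        if np.log(rng.uniform()) < log_ratio - log_M:
            out.append(x)
    return np.array(out)

samples = rejection_sample(4000)
```

Here `log_M = log 3` exceeds the supremum of the density ratio (about `log 2 + 1/6`), so the acceptance test is valid and the sample mean converges to the target mean of 1 without the finite-sample bias of the self-normalized estimator.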
We hope that our proposed ideas will allow one to further modify existing algorithms or build new ones to work more efficiently. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback and insights. To enhance the clarity and comprehensiveness of our work, we have identified key areas where multiple reviewers expressed similar questions or critiques. We will provide detailed explanations and refinements in the updated version of the paper.
* We emphasize that our paper is theoretical (the chosen area is Learning Theory). The main results consist in a methodology for deriving various formulas for training a vector field (some of which are collected in Table 1 of the main paper), a rigorous proof of the lower variance of the proposed loss, and various modifications of the training formula, including the case when neither the initial nor the final distribution is known but only samples are given, as well as the case with a stochastic term added to the equation. The obtained algorithm is a practical illustration of our theoretical studies rather than the main result; this algorithm itself could have been obtained in a simpler way. We also emphasize that the significance of the loss lies in its derivation from a rigorous analysis of flow matching, which enables us to develop a deeper understanding of the underlying dynamics and to explore various modifications (as exemplified in Table 1).
The analysis goes beyond simply introducing a new loss function and includes a detailed examination of bias and variance, culminating in the derivation of an optimal velocity formula. We believe that this theoretical foundation is essential for advancing the field of continuous flows and informing the development of future algorithms.
We agree that our proposed loss function can be seen as a special case of the Flow Matching objective.
Analogues of our formula are given in other papers and we cited some of them.
However, to the best of our knowledge,
a) a learning algorithm that uses 2 different batches has not been published before;
b) we propose to use not only SIS, but for example rejection sampling or bias-reduced versions of SIS ([3], Gabriel Cardoso et al.), as written in Appendix B;
c) an explicit Flow Matching solution for the trajectories Eq (36) for the Gaussian->Gaussian case has not been obtained before;
d) an explicit expression for the vector field Eq. (37) for the Gaussian->Gaussian Mixture case was not obtained before;
e) there was no analysis of the trajectories in the case of an additional stochastic term as in Appendix E before;
f) we consider a class of invertible conditional maps and take a rigorous limit to obtain results valid for the non-invertible simple linear map ($x_0 (1-t) + x_1 t$);
etc.
Thus, our investigation goes far beyond presenting a single formula for a vector field like Eq (15) or a single practical algorithm.
* While our paper mostly focuses on theoretical aspects, as a proof of concept we have provided a more comprehensive evaluation, incorporating supplementary visualizations and expanding our analysis. Our variance reduction is less evident on certain tabular datasets, notably BSDS. To address this, we've included detailed visualizations of loss and metric evolution (Figures 7-11), which clearly demonstrate the variance reduction achieved during training. We have significantly enhanced the paper by incorporating additional analyses, visualizations, and datasets. This includes new experiments on 2D toy data using Energy Distance and loss over steps, as well as expanded visual comparisons (which can be viewed in the supplementary 1-page PDF) and density comparisons of distributions. Our experimental results consistently demonstrate that our method surpasses the performance of CFM. Notably, it also matches or exceeds the performance of OT-CFM, a model explicitly tailored to variance reduction, in the majority of our evaluations.
* We corrected inaccuracies in the highlighting in the NLL metrics table and adjusted the format, but kept e-notation because we believe it offers a more compact and informative representation. The revised manuscript now includes a more in-depth analysis comparing CFM, OT-CFM, and ExFM using Wasserstein distance (Table 1 in the 1-page PDF) and Energy Distance (Table 2 in the 1-page PDF).
* We have significantly enhanced the paper with new analysis on toy 2D data. This includes plots illustrating the relationship between energy distance and steps, and between loss and steps, as well as expanded visual comparisons (Figure 1 in the 1-page PDF) and a comparison of the distributions' densities. Here we provide the most important results of the additional toy 2D data visualisations; the other mentioned additions are in the revised version of the paper. From our experiments we can see the tendency that our method outperforms CFM in all cases and, in most cases, OT-CFM, which was specifically built to reduce variance.
* We sincerely apologize for any typos or misleading information present in the previous version of the paper. We have conducted a thorough review and made necessary corrections to ensure accuracy and clarity. We appreciate the reviewer's diligence in identifying these issues.
By addressing these points, we aim to demonstrate the value of our work in providing a strong theoretical foundation for flow matching, while also acknowledging the importance of clear communication and accurate presentation.
Pdf: /pdf/6c693925cb69d03bb450f1266befe449511c559c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Gradient Rewiring for Editable Graph Neural Network Training | Accept (poster) | Summary: Model editing involves fine-tuning a pre-trained model specifically on training examples where it cannot predict the correct output, in order to correct these errors. While model editing has been explored for CV and NLP models, GNNs pose a unique challenge due to the unordered data type and the node-level classification problem. This paper addresses this challenge by proposing a Gradient Rewiring+ (GRE+) method for editable graph neural network training that preserves the original train/test set performance. GRE+ is evaluated on several small and large classification datasets.
Strengths: - The proposed method aims to rectify errors that still persist after training, without compromising the original performance. To quantify this, the authors report three metrics: The accuracy after model editing has been performed, the DropDown metric measuring how this impacts the original performance and the success rate of the method at actually rectifying the error. The three of these provide a clear, multi-faceted view of the effect of the proposed method against the baselines.
- In most situations, the proposed GRE+ proves to be the best method.
- The introduction and motivation of the paper is generally well-written and easy to follow.
Weaknesses: - The biggest issue is the presentation and the decision to treat GRE and GRE+ as different methods to compare to baseline approaches. This is because the experimental results in Tables 1 and 2 show that GRE on its own is not really noteworthy, as it fails to consistently rank as the 2nd best behind GRE+ and is outperformed by GD and ENN on numerous occasions.
- This hampers the writing in the method section, which focuses first on what GRE does before expanding to talk about GRE+ in more detail. In contrast to this writing decision, it would be better if GRE and GRE+ were not treated separately: "GRE" would mean GRE+, and instead of having 2 rows for your method in Tabs 1/2 you would just have GRE(+), then later do an ablation study (where one option is the GRE config as mentioned in the submitted paper) to show the efficacy/necessity of the whole method.
- Line 243 This is the first time the word "Transformers" is mentioned in the paper and it's jarringly brought up out of the blue. The introduction broadly mentioned that model editing exists for CNNs and NLP tasks but not specifically the Transformer architecture. This prose should be deeply revised before the paper is publication-ready.
- There is no analysis/study of the cost/complexity of the model editing methods which is a major weakness. Looking at Fig 3 while it is true that GRE+ improves little/none over GRE, in Tab 1/2 the story is much different. There does not seem to be a situation where GRE is better so I must question why it is presented as a distinct approach itself compared to GRE+. Is it because it is computationally less expensive?
- Some statements are not properly substantiated with citations:
- L105 "It is well-known that model editing incurs training performance degredation"
- L167 "Since shallow GNNs model performs well in practice" seems like a handwave. Also, spelling/grammar check.
- Other nitpicks:
- Tab 1 caption mentions "OOM" for some methods but that is only used in entries for Tab 2.
- Fig 2 should have larger fonts; it is very hard to read.
- L242, should read ", respectively." when enumerating like that.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Fig 3: where are the results for ENN?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: A limitations section is provided in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive and insightful comments. We carefully revised the manuscripts based on all reviewers' comments. Please see the revised manuscript at https://anonymous.4open.science/r/Gradient_rewiring_editing-E16E/GRE_NeurIPS24.pdf.
**[W1: Presentation on GRE and GRE+]**
Ans: Thank you for the constructive comment. We respectfully disagree with the assertion that GRE is not consistently ranked as the second-best method behind GRE+. We believe that treating GRE and GRE+ as different methods in our writing is reasonable.
GRE achieves second-best performance in **both sequential editing and batch editing**, while demonstrating competitive performance in independent editing. In the independent editing task, all methods can achieve a high success rate since it is relatively simple to correct one target sample. However, GRE still secures second-best performance in several instances, such as on the ogbn-arxiv and ogbn-products datasets in GCN. More importantly, GRE excels in more complex editing scenarios, such as sequential editing and batch editing. For sequential editing, GRE achieves second-best performance across various datasets, as shown in **Figures 3 and 6**. Similarly, for batch editing, GRE also ranks second-best, as evidenced by the results in **Tables 6 and 7**.
In summary, GRE shows significant improvement over baselines in sequential and batch editing. From a technical perspective, GRE provides a closed-form solution for gradient rewiring, while GRE+ requires a numerical QP solver. Therefore, we treat GRE and GRE+ separately due to the significant performance improvements offered by GRE and its distinct closed-form solution.
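The flavor of such a closed-form single-constraint correction can be illustrated with a generic gradient-projection sketch (a GEM-style toy example offered only as an illustration, not the authors' exact GRE formula; the gradient vectors are hypothetical flattened parameter gradients):

```python
import numpy as np

def rewire_gradient(g_target, g_train):
    """Closed-form solution of  min ||g - g_target||^2  s.t.  g @ g_train >= 0:
    if the edit gradient conflicts with the stored training-gradient anchor
    (negative inner product), project the conflicting component away, so the
    edit no longer increases the training loss to first order."""
    dot = g_target @ g_train
    if dot >= 0:                    # no conflict: keep the edit gradient
        return g_target
    return g_target - (dot / (g_train @ g_train)) * g_train

# The second coordinate conflicts with the anchor and is projected out.
g = rewire_gradient(np.array([1.0, -1.0]), np.array([0.0, 1.0]))  # -> [1., 0.]
```

With several stored anchors the single inequality becomes a set of constraints, and the projection no longer has such a simple closed form, which is consistent with the rebuttal's point that GRE+ requires a numerical QP solver.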
**[W2: Model editing exists for CNNs and NLP tasks but not specifically the Transformer architecture]**
Ans: To avoid confusion, we have revised the statement of Observation 1 in Section 4.2. Specifically, we removed the part about transformers and generally mentioned that all editors can effectively rectify model predictions for independent editing in the graph domain.
**[W3: No analysis/study of the cost/complexity of the model editing methods]**
Ans: For comparing time complexity and memory consumption, we measured the wall-clock editing time in milliseconds and GPU peak memory in megabytes, reporting average values over 50 independent edits in **Appendix D.5**. Due to the limited rebuttal length, we only show time complexity and memory consumption for GCN.
Our results show that the proposed method is scalable in terms of memory consumption and has manageable editing time overhead. For example, in the GraphSAGE architecture on the Flickr dataset, GRE+ (5) results in only a 13.5% increase in peak memory compared to GD. In terms of wall-clock editing time, the most time-consuming version, GRE+ (5), shows an insignificant overhead, with only a 6.31% increase on the ogbn-products dataset in the GraphSAGE architecture. These observations demonstrate the scalability of GRE+ for large datasets. It is noteworthy that model editing is usually efficient and fast, making slightly slower editing affordable while improving editing effectiveness.
| | Editor | Flickr ET (ms) | Flickr PM (MB) | Reddit ET (ms) | Reddit PM (MB) | ogbn-arxiv ET (ms) | ogbn-arxiv PM (MB) | ogbn-products ET (ms) | ogbn-products PM (MB) |
|----------------|---------------------|----------------|----------------|----------------|----------------|--------------------|--------------------|-----------------------|-----------------------|
| **GCN** | **GD** | 67.46 | 707.0 | 345.23 | 3244.8 | 94.58 | 786.2 | 2374.15 | 14701.7 |
| | **ENN** | 109.82 | 666.8 | 405.24 | 3244.8 | 242.85 | 786.2 | -- | OOM |
| | **GRE** | 63.93 | 695.8 | 391.54 | 3491.3 | 84.74 | 956.9 | 2400.78 | 17336.6 |
| | **GRE+ (2)** | 100.45 | 696.0 | 457.08 | 3493.2 | 121.11 | 957.8 | 2413.69 | 17338.7 |
| | **GRE+ (3)** | 115.29 | 697.9 | 509.44 | 3493.9 | 131.06 | 957.9 | 2471.23 | 17338.9 |
| | **GRE+ (5)** | 155.05 | 698.6 | 603.85 | 3495.6 | 162.24 | 958.3 | 2591.06 | 17339.2 |
**[W4: Add citations and other nitpicks.]**
Ans: We have added citations for several statements and revised the format and writing issues accordingly.
**[Q1: Fig 3 where are results on ENN?]**
Ans: Thank you for your constructive comment. Model editing consists of two stages: the pre-training stage, where a well-trained model is obtained, and the editing stage, where undesirable behavior is corrected. The GD, GRE, and GRE+ methods perform editing during the editing stage using the **same pre-trained model**. For these methods, a higher test accuracy drawdown implies a lower test accuracy after editing. In contrast, the key goal of the ENN baseline is to improve the model during the pre-training stage. The objective at this stage is to obtain a model that can be easily edited using gradient descent while maintaining good task performance. Since **the test accuracy of the pre-edited models varies**, it is infeasible to compare these methods in terms of test drawdown. To fairly compare all methods, we have **added a comparison of the test accuracy performance after editing for sequential editing in Appendix D.6**. It is observed that the accuracy of GRE+ is higher than all baselines.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed rebuttal and paper revisions. While I am not totally convinced by the decision to differentiate GRE and GRE+, I do find the rebuttal rational is reasonable. After reading the other reviews and responses, I am willing to raise my score to borderline accept.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer X56x
Comment: Thank you for your thoughtful review and for considering our detailed rebuttal and revisions. We appreciate your feedback on the differentiation between GRE and GRE+, and we're glad you found our rationale reasonable. We're grateful for your willingness to raise your score and value your insights in helping us refine our work. | Summary: The authors propose a model editing technique motivated by an observed inconsistency between the gradient of target and training nodes' cross-entropy losses. Their proposed method, Gradient Rewiring for Editable Graph Neural Networks (GRE), stores the anchor of the gradients of the training nodes and uses the anchors during editing to preserve performance. Finally, the authors empirically evaluate their proposed method on a collection of real-world datasets.
Strengths: - I believe the problem to be an important practical problem and that the paper is well-motivated.
- The main observation of the paper, i.e., that the gradients of the target and the training nodes are inconsistent, is interesting and novel in the context of Graph Neural Networks.
- The proposed method seems to perform well consistently. The authors have performed a comprehensive empirical evaluation, and I believe the results are solid.
- The authors are overcoming the quadratic problem in the number of model parameters by solving a dual problem with significantly fewer variables.
Weaknesses: - The method requires storing the gradients of the training dataset. This would raise the memory consumption. It would be great if the authors could include a table with the memory requirements for different models.
- There are a lot of writing/grammar issues. In the following, I will give a (non-exhaustive) list of examples of missing articles and text where the writing could generally be improved. This would generally not be a big deal, but in this case, the writing makes the text very hard to follow and comprehend. I would encourage the authors to do a few revision rounds before updating the manuscript. In my personal experience, free tools such as Grammarly also help with identifying weird formulations or grammar mistakes:
- 16-17: the gradient of *the* loss
- Line 21: "interpreting the features and topology of graph data" - *interpreting* is a somewhat unusual way to describe learning representations on graphs; please consider rephrasing.
- Line 42: "(...) intricate editing for GNNs through the lens of landscape." - this phrasing is very confusing; what landscape? I suggest the authors rephrase to something more akin to "through the lens of the loss landscape of the Kullback-Lieber divergence between the initial node features and the final node embeddings" since this seems to be the technique that [[1]] is proposing.
- Line 46: "(...) perspective, and is compatible with existing work" - please consider rephrasing to something similar to "(...) perspective, which is compatible with existing work".
- Line 49: "(...) can lead to a deterioration in the performance *of* the training nodes" -> "(...) can lead to a deterioration in the performance *on* the training nodes" - the performance is of the model on the training nodes
- Line 55: similar, "performance *of* the training nodes" -> "performance *on* the training nodes"
- The caption of Figure 1 is somewhat confusing; please consider rephrasing.
- Line 93-94: Somewhat confusing: "motivation to rewire gradients *of* model editing" or "motivation to rewire gradients *for* model editing"?
- Line 95: "and advanced version (GRE+)" -> "and *an* advanced version (GRE+)"
- Lines 98-99: "we pre-train (...) on (...) datasets" -> "we pre-train (...) on *the* (...) datasets"
- Lines 101-102: please consider rephrasing to something more akin to "we fine-tune the well-trained model using the cross-entropy loss of the target sample via gradient descent"
- Lines 106-108: "we investigate performance degradation from model gradient perspective" -> "we investigate *the* performance degradation from *a* model gradient perspective"; "we further define training loss" -> "we further define *the* training loss"; "where ... is prediction model ... CE is cross-entropy loss" -> "where ... is *a* prediction model ... CE is *the* cross-entropy loss".
- Footnote 3 is very confusing; please consider rephrasing the entire footnote.
- Lines 131-147 are also somewhat confusing and hard to comprehend due to language. Please consider rephrasing all of the paragraphs.
- Line 148: "where gradient for model prediction is defines as" -> "where *the* gradient for *the* model prediction is *defined* as". Also, the inline *\frac* is not aesthetically pleasing, but that is a very minor complaint.
- Line 161: Please consider either removing "*that*": "it is easy to obtain *that* the optimal dual variable v=..." -> "it is easy to obtain the optimal dual variable $v*$=..." or rephrasing to something more akin to "it is easy to see that the optimal dual variable is $v*$=..."
- Please also consider rephrasing the next lines, from 162 to 164.
- Missing commas on line 172: "the training loss for the whole training dataset, after model editing, is on par with (...)"
- Please consider re-phrasing lines 199-200 to something more akin to "We randomly select a node from the validation set on which the well-trained model makes a wrong prediction."
- Both Tables 1 and 2 have results with different decimal precision for the standard deviation. For instance, GraphSAGE/GRE/CORA/DD has a single decimal ($3.36 \pm 0.2$) while GraphSAGE/GRE+/CORA/DD contains two decimals ($0.41\pm 0.07$). Please consider using the same amount of decimals for all of the results. It makes the table more aesthetically pleasing and easier to read.
- "OOM" is mentioned in the caption of Table 1 as being an out-of-memory error, but there are no OOMs in Table 1. However, Table 2 contains OOMs but the caption does not explain what OOM means.
- Please consider increasing the font of the text for both Figures 1 and 2.
- Figure 4, first row, first column: the figure covers half the "N" of "GCN". The plots are also not vertically aligned on the two columns.
Overall, I believe that the proposed method is solid due to the very good empirical results. However, the writing is lacking to a degree that makes the text very hard to follow and comprehend. I would encourage the authors to revise the text significantly, with multiple rounds of proofreading.
Due to concerns about the clarity and writing, I currently recommend rejection. However, I will revise my score if the authors significantly improve the writing during the rebuttal. I think that the overall work could potentially be impactful and useful from a practitioner's perspective.
[1]: https://arxiv.org/pdf/2305.15529
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weaknesses above.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors are discussing limitations in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive and insightful comments. We carefully revised the manuscripts based on all reviewers' comments. Please see the revised manuscript at https://anonymous.4open.science/r/Gradient_rewiring_editing-E16E/GRE_NeurIPS24.pdf.
**[W1: Add comparison in terms of editing time and memory.]**
Ans: For comparing time complexity and memory consumption, we measured the wall-clock editing time in milliseconds and GPU peak memory in megabytes, reporting average values over 50 independent edits in **Appendix D.5**.
Our results show that the proposed method is scalable in terms of memory consumption and has manageable editing time overhead. For example, in the GraphSAGE architecture on the Flickr dataset, GRE+ (5) results in only a 13.5% increase in peak memory compared to GD. In terms of wall-clock editing time, the most time-consuming version, GRE+ (5), shows an insignificant overhead, with only a 6.31% increase on the ogbn-products dataset in the GraphSAGE architecture. These observations demonstrate the scalability of GRE+ for large datasets. It is noteworthy that model editing is usually efficient and fast, making slightly slower editing affordable while improving editing effectiveness.
| | Editor | Flickr ET (ms) | Flickr PM (MB) | Reddit ET (ms) | Reddit PM (MB) | ogbn-arxiv ET (ms) | ogbn-arxiv PM (MB) | ogbn-products ET (ms) | ogbn-products PM (MB) |
|----------------|---------------------|----------------|----------------|----------------|----------------|--------------------|--------------------|-----------------------|-----------------------|
| **GCN** | **GD** | 67.46 | 707.0 | 345.23 | 3244.8 | 94.58 | 786.2 | 2374.15 | 14701.7 |
| | **ENN** | 109.82 | 666.8 | 405.24 | 3244.8 | 242.85 | 786.2 | -- | OOM |
| | **GRE** | 63.93 | 695.8 | 391.54 | 3491.3 | 84.74 | 956.9 | 2400.78 | 17336.6 |
| | **GRE+ (2)** | 100.45 | 696.0 | 457.08 | 3493.2 | 121.11 | 957.8 | 2413.69 | 17338.7 |
| | **GRE+ (3)** | 115.29 | 697.9 | 509.44 | 3493.9 | 131.06 | 957.9 | 2471.23 | 17338.9 |
| | **GRE+ (5)** | 155.05 | 698.6 | 603.85 | 3495.6 | 162.24 | 958.3 | 2591.06 | 17339.2 |
| **Graph-SAGE** | **GD** | 117.74 | 843.0 | 1024.12 | 4416.53 | 107.63 | 891.3 | 2125.07 | 13832.2 |
| | **ENN** | 134.50 | 843.0 | 2597.21 | 4416.5 | 277.29 | 891.3 | -- | OOM |
| | **GRE** | 116.03 | 952.4 | 1089.29 | 4955.4 | 100.09 | 1072.5 | 2132.02 | 16254.1 |
| | **GRE+ (2)** | 167.17 | 954.5 | 1267.13 | 4959.0 | 136.28 | 1073.7 | 2135.88 | 16255.9 |
| | **GRE+ (3)** | 176.66 | 955.5 | 1363.53 | 4960.7 | 154.29 | 1074.0 | 2211.63 | 16256.0 |
| | **GRE+ (5)** | 219.81 | 957.5 | 1603.03 | 4964.2 | 180.73 | 1075.5 | 2275.72 | 16256.3 |
**[W2: Tackle lots of writing/grammar issues]**
Ans: Thanks for the careful review. We have revised the manuscripts significantly according to your comments.
**[W3: Format issues on Table and Figures]**
Ans: We have revised **the caption and uniform decimal precision in Tables 1 and 2, increased the font of the text for both Figures 1 and 2, and revised Figure 4**.
---
Rebuttal 2:
Comment: I thank the authors for their rebuttal. I believe that the authors have significantly improved the writing for their revision. The new time and memory comparison also strengthens the paper.
Still, I would like to point out that the writing could still be improved:
- Line 298: _in this experiment, we **scrutinize** the sensitivity of our proposed method_ - in my opinion, "scrutinize" is an odd term in this context. While not wrong, I believe that "analyze" would have been a better choice.
- Line 21: I think the first sentence is still an odd way to present GNNs; "_integrating_" doesn't really fix the main issue. I would suggest that the authors revamp their first introduction paragraph entirely.
- Line 28: "_In the ideal scenario, the promising property of tackling such errors would be threefold: ..._" - something along the lines of "_An ideal method that could tackle such errors would need to have the following properties: ..._" would be, in my opinion, much clearer.
- There are still some decimal precision issues in the tables - for instance, in Tab. 1 the SR column contains both $1.0$ and $0.98$, Tab. 2 Column Flickr-ACC; 2-GCN-GD $11.0$ for the std but GCN-GRE $1.50$.
Again, this is not a comprehensive list. I would recommend that the authors do some more passes through the manuscript for the next revision.
Nevertheless, the writing has been significantly improved, and the method seems effective and practical - I am leaning towards acceptance right now, and have updated my score from a reject (3) to a borderline accept (5), with the presentation score going from poor (1) to fair (2).
I will continue watching the rebuttal discussions, and might modify the score further depending on other comments.
---
Rebuttal Comment 2.1:
Title: Response to Reviewer KXu3
Comment: We sincerely thank the reviewer for the thorough review and for carefully considering not only our responses but also our interactions with other reviewers. In response to your suggestion, we have conducted a comprehensive proofreading of the manuscript, and the revised version is now available at https://anonymous.4open.science/r/Gradient_rewiring_editing-E16E/GRE_NeurIPS24.pdf. In addition to improving the manuscript, we have also open-sourced our code at https://anonymous.4open.science/r/Gradient_rewiring_editing-E16E/README.md to enhance the reproducibility of our experiments. We kindly ask the reviewer to consider raising the score if all concerns have been satisfactorily addressed. | Summary: The work introduces a novel method called Gradient Rewiring (GRE) to address the challenge of editable training in graph neural networks. Traditional fine-tuning approaches often struggle with maintaining performance for both target and training nodes. GRE aims to overcome this limitation by rewiring gradients in a way that preserves locality, leading to improved performance for both types of nodes. The method is designed to enhance the training process in graph neural networks by effectively updating node representations while maintaining the network's overall structure.
Strengths: 1. The method of the paper is clear, intuitive, and easy to implement.
2. The writing is clear, making it easy to understand and read.
Weaknesses: 1. Although the experiments were conducted on graph datasets, the proposed method is not specifically designed for graphs. This general approach can be tested on various tasks.
2. There are noticeable formatting errors and typos in the paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. It would be beneficial to provide a more detailed explanation of the experimental process and results in Figure 1. In the first row of Figure 1, the images indicate that the gradients for testing and training become very similar, suggesting that the test loss and train loss should decrease simultaneously. However, in the second and third rows, the test loss decreases while the training loss increases. Please further explain this observation.
2. The training + rewiring pipeline is similar to curriculum learning, where the network is trained in two phases: first on simple samples, then on more difficult ones. I recommend the authors conceptually compare their method with curriculum learning. Additionally, I suggest experimenting with batch data rewiring, which might be more practical compared to rewiring individual samples or sample sequences.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations have been discussed in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive and insightful comments. We carefully revised the manuscript based on all reviewers' comments. Please see the revised manuscript at https://anonymous.4open.science/r/Gradient_rewiring_editing-E16E/GRE_NeurIPS24.pdf.
**[W1: This general approach can be tested on various tasks.]**
Ans: Thank you for your insightful observation. We agree that, while our experiments were conducted on graph datasets, the proposed gradient rewiring method is not inherently specific to graphs. However, it is particularly **suitable in the graph domain due to the small model size**. Specifically, graph models typically have only a few layers and are thus smaller than the models (e.g., Transformers) used in NLP and CV tasks. This results in lower computational and storage costs for gradients, making our strategy particularly suitable for the graph domain. Additionally, it is more challenging to edit nodes in a graph due to the **inherent propagation process within neighborhoods**. Such propagation may lead to significant gradient discrepancies within the graph domain.
In this work, we primarily focus on **gradient rewiring within the graph domain**, exploring multiple editing manners (i.e., independent editing, sequential editing, and batch editing). **We acknowledge the potential applicability of our gradient rewiring strategy to other domains and leave its exploration for future work in Appendix B**.
**[W2: There are noticeable formatting errors and typos in the paper.]**
Ans: We have carefully revised the manuscript. Please check the revised manuscript for the details.
**[Q1: Provide a more detailed explanation of the experimental process and results in Figure 1.]**
Ans: Thank you for the insightful comment. Gradient similarity does not necessarily imply consistent test and training loss. Loss consistency primarily depends on the model parameters rather than the first-order gradient. Therefore, there are cumulative effects of gradient inconsistency; significant initial gradient discrepancies can lead to substantial differences in model parameters, resulting in inconsistent training and test loss. We have added a more detailed explanation in Figure 1.
**[Q2a: Conceptually compare their method with curriculum learning]**
Ans: Thanks for the insightful comment. We have added a discussion of curriculum learning in **Appendix F**. Here is the discussion:
Curriculum learning and model editing are two distinct approaches in the field of machine learning. Curriculum learning trains the network in a structured manner, starting with simpler tasks and gradually introducing more complex ones; this method aims to improve the learning process by mimicking how humans learn. Model editing is a fast and efficient approach to patch a well-trained model's predictions on several failed test cases. Although both involve multi-stage training, there are several key differences:
(1) **Goals**: Curriculum learning aims to improve the overall learning process by structuring the training data in a way that mimics human learning. In contrast, model editing aims to make targeted adjustments to a pre-trained model to correct undesirable behaviors. (2) **Approach**: Curriculum learning mainly focuses on the sequence and complexity of the training data. Model editing typically modifies the model's parameters or architecture to correct undesirable behaviors.
(3) **Additional information in the multi-stage process**: Model editing requires failure feedback on the well-trained model as the target samples to patch, e.g., test failure cases after the model is deployed to production. In other words, such feedback can only be obtained after model pretraining. In curriculum learning, all information is given throughout multi-stage training. In summary, curriculum learning focuses on structuring the training process to improve overall learning, while model editing focuses on making targeted adjustments to a pre-trained model to correct specific behaviors. Both approaches can be complementary and used together to achieve better model performance.
**[Q2b: More experimental results on batch editing.]**
Ans: We have added experimental results on batch editing in **Appendix D.4 (Tables 6 and 7)**. We observe that, compared to independent editing, batch editing is more challenging as all batch samples need to be patched simultaneously. Additionally, GRE and GRE+ both demonstrate significant performance improvements compared to GD and ENN on various model architectures and datasets.
---
Rebuttal Comment 1.1:
Comment: Thank you to the reviewer for the constructive comments and positive outlook on our paper. I wanted to kindly remind the reviewer that the discussion period is nearing its end, and I would greatly appreciate any additional feedback. Your insights have been invaluable in refining this work, and we are eager to address any remaining concerns before the final decision. | Summary: This paper tackles the challenge of editing GNNs. The authors highlight a key issue in GNN editing: the gradient inconsistency between target and training nodes, which can degrade performance when the model is fine-tuned using only the target node’s loss. To address this, they introduce the Gradient Rewiring (GRE) method, which preserves the performance of training nodes by storing an anchor gradient and rewiring the target node’s loss gradient accordingly. The efficacy of GRE is validated through experiments across various GNN architectures and graph datasets, demonstrating its potential to enhance model adaptability without compromising existing accuracies.
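The anchor-gradient rewiring described in the summary above can be illustrated with a minimal sketch. The projection rule below (dropping the component of the target-loss gradient that conflicts with the stored anchor gradient, in the style of GEM-like gradient projection) is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def rewire_gradient(g_target, g_anchor):
    """Illustrative GEM-style rewiring: if the target-loss gradient conflicts
    with the stored anchor gradient (negative inner product), subtract the
    conflicting component so the edit does not undo training-node performance."""
    dot = float(g_target @ g_anchor)
    if dot >= 0:  # no conflict: keep the raw target gradient
        return g_target
    return g_target - dot / float(g_anchor @ g_anchor) * g_anchor

g_anchor = np.array([1.0, 0.0])            # gradient preserving training nodes
g_target = np.array([-1.0, 1.0])           # conflicts with the anchor direction
g = rewire_gradient(g_target, g_anchor)    # -> array([0., 1.])
# after rewiring, the update no longer opposes the anchor gradient
assert g @ g_anchor >= 0
```

The non-conflicting case passes the target gradient through unchanged, so rewiring only intervenes when the edit would hurt the training nodes.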
Strengths: - This paper is well-motivated and well-organized, addressing a research question that remains to be further investigated. The provided visualization also aids in understanding the motivations behind the study.
- The authors offer a detailed derivation process, making the methodology easy to follow.
- The method was tested on large-scale datasets, demonstrating the model's scalability.
Weaknesses: - The significance of editing graph neural networks remains unclear. Editing seems to lead to significant performance degradation, especially on large-scale datasets. The authors should discuss why such significant efforts are warranted to modify predictions on individual samples.
- I recognize that considering graph neural network editing from the perspective of gradient rewiring is novel, but the authors have only discussed model editing in related work. The lack of gradient-modification literature makes it hard to position this paper.
- Some experimental details are missing, such as dataset splitting. These details are crucial for evaluating the proposed method.
- The time complexity is not provided.
- Some minor issues (e.g., a superscript linking to nothing in the caption of Figure 1).
Technical Quality: 3
Clarity: 2
Questions for Authors: - In what scenarios would people be willing to accept a substantial overall performance drop in order to modify predictions for individual samples?
- Are there any methods developed for editing neural networks on non-graph data based on gradient modification? Or are there any works on gradient modification that are relevant to the method in this paper?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: As the authors discussed in Appendix B.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive and insightful comments. We carefully revised the manuscript based on all reviewers' comments. Please see the revised manuscript at https://anonymous.4open.science/r/Gradient_rewiring_editing-E16E/GRE_NeurIPS24.pdf.
**[W1: The significance of editing graph neural networks remains unclear]**
Ans: Model editing is crucial for ensuring that machine learning models remain reliable and effective post-deployment. This practice addresses high-profile failure cases and misbehaviors that emerge after a model's initial training, often brought to attention through user feedback.
In general, there are **editing-worthy scenarios, such as high-profile failures and prioritizing critical mistakes**. For example, users may try to prompt large language models (LLMs) to misbehave (e.g., giving criminal advice, leaking the system prompt, etc.) to induce "high-profile failures." In the context of self-driving cars, misclassifying a child as a cat poses far greater risks than misclassifying a cat as a dog. Although graph applications typically don't have this kind of direct interaction with users and are not as intuitive, they certainly have high-stakes scenarios, such as **patient readmission [1] and flood prediction [2]**, which warrant the study of model editing.
In short, while model editing has been widely explored in computer vision (CV) and natural language processing (NLP) tasks, it has rarely captured attention in the graph learning community. It is indispensable to investigate graph model editing for high-stakes graph applications.
[1] Predicting patient readmission risk from medical text via knowledge graph enhanced multiview graph convolution. SIGIR 2021
[2] Kazadi, Arnold N., et al. "Flood prediction with graph neural networks." Climate Change AI. Climate Change AI (2022).
**[W2: Please discuss gradient modification literature in related work]**
Ans: We have added a discussion on the literature regarding gradient modification in continual learning and meta-learning. Please see more details in **Appendix E**.
**[W3: More experimental details, such as data splitting]**
Ans: For all datasets, we first randomly split the data into train, validation, and test sets. Specifically, we ensure that each class has 20 samples in the training set and 30 samples in the validation set. The remaining samples are used for the test set. The target node is randomly selected multiple times from the validation set where the well-trained model makes incorrect predictions.
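The per-class split described above (20 training and 30 validation samples per class, with the remainder as test) can be sketched as follows; the function name and seed handling are assumptions for illustration.

```python
import random
from collections import defaultdict

def split_per_class(labels, n_train=20, n_val=30, seed=0):
    """Randomly split sample indices per class: n_train for training,
    n_val for validation, and all remaining samples for the test set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    train, val, test = [], [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        train += idxs[:n_train]
        val += idxs[n_train:n_train + n_val]
        test += idxs[n_train + n_val:]
    return train, val, test

labels = [0] * 60 + [1] * 60   # toy dataset: two classes, 60 samples each
tr, va, te = split_per_class(labels)
assert len(tr) == 40 and len(va) == 60 and len(te) == 20
```

Target nodes for editing would then be drawn from the validation indices on which the trained model errs, as the rebuttal describes.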
**[W4: The time complexity is not provided.]**
Ans: For comparing time complexity and memory consumption, we measured the wall-clock editing time in milliseconds and GPU peak memory in megabytes, reporting average values over 50 independent edits in **Appendix D.5**. Due to the limited rebuttal length, we only show time complexity and memory consumption for GCN.
Our results show that the proposed method is scalable in terms of memory consumption and has manageable editing time overhead. For example, in the GraphSAGE architecture on the Flickr dataset, GRE+ (5) results in only a 13.5% increase in peak memory compared to GD. In terms of wall-clock editing time, the most time-consuming version, GRE+ (5), shows an insignificant overhead, with only a 6.31% increase on the ogbn-products dataset in the GraphSAGE architecture. These observations demonstrate the scalability of GRE+ for large datasets. It is noteworthy that model editing is usually efficient and fast, making slightly slower editing affordable while improving editing effectiveness.
| | Editor | Flickr ET (ms) | Flickr PM (MB) | Reddit ET (ms) | Reddit PM (MB) | ogbn-arxiv ET (ms) | ogbn-arxiv PM (MB) | ogbn-products ET (ms) | ogbn-products PM (MB) |
|----------------|---------------------|----------------|----------------|----------------|----------------|--------------------|--------------------|-----------------------|-----------------------|
| **GCN** | **GD** | 67.46 | 707.0 | 345.23 | 3244.8 | 94.58 | 786.2 | 2374.15 | 14701.7 |
| | **ENN** | 109.82 | 666.8 | 405.24 | 3244.8 | 242.85 | 786.2 | -- | OOM |
| | **GRE** | 63.93 | 695.8 | 391.54 | 3491.3 | 84.74 | 956.9 | 2400.78 | 17336.6 |
| | **GRE+ (2)** | 100.45 | 696.0 | 457.08 | 3493.2 | 121.11 | 957.8 | 2413.69 | 17338.7 |
| | **GRE+ (3)** | 115.29 | 697.9 | 509.44 | 3493.9 | 131.06 | 957.9 | 2471.23 | 17338.9 |
| | **GRE+ (5)** | 155.05 | 698.6 | 603.85 | 3495.6 | 162.24 | 958.3 | 2591.06 | 17339.2 |
**[W5: Some minor issues (e.g., a superscript linking to nothing in the caption of Figure 1).]**
Ans: We have carefully revised the manuscript. Please check the revised manuscript for the details.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. Most of my concerns have been addressed. However, I would suggest releasing the code in the discussion phase to make your experiments convincing. For now, I maintain my original score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer VRYA
Comment: Thank you once again for your constructive feedback. We are pleased that our response has addressed most of your concerns. In line with your suggestion, we have now open-sourced our code at https://anonymous.4open.science/r/Gradient_rewiring_editing-E16E/README.md. We hope that this will enhance the transparency and reproducibility of our experiments, and we kindly ask the reviewer to consider raising the score if all concerns have been satisfactorily addressed. | Rebuttal 1:
Rebuttal: # A summary of our rebuttal.
We thank all reviewers for your valuable time and comments. We are glad that many reviewers found the following:
* **Our paper is well-motivated and well-organized**
* R `VRYA`: This paper is well-motivated and well-organized, addressing a research question that remains to be further investigated.
* R `KXu3`: I believe the problem to be an important practical problem and that the paper is well-motivated. The main observation of the paper, i.e., that the gradients of the target and the training nodes are inconsistent, is interesting and novel in the context of Graph Neural Networks.
* R `X56x`: The introduction and motivation of the paper are generally well-written and easy to follow.
- **Our method is clear, intuitive, and easy to follow.**
* R `VRYA`: The authors offer a detailed derivation process, making the methodology easy to follow.
* R `LHi3`: The method of the paper is clear, intuitive, and easy to implement.
- **The experimental results are solid and strong.**
* R `VRYA`: The method was tested on large-scale datasets, demonstrating the model's scalability.
* R `KXu3`: The proposed method seems to perform well consistently..., and I believe the results are solid.
* R `X56x`: In most situations, the proposed GRE+ proves to be the best method.
On the other hand, aside from some cosmetic suggestions, the reviewers raised the following points:
- **Editing time and peak memory comparison** (R `VRYA`, `KXu3`, `X56x`): We conduct an editing time and peak memory comparison in **Appendix D.5**.
- The **significance of editing graph neural networks** remains unclear (R `VRYA`): there are multiple editing-worthy scenarios, such as high-profile failures and prioritizing critical mistakes. Two specific applications are patient readmission and flood prediction.
- **Batch editing experimental results** (R `X56x`): The batch editing results are shown in **Appendix D.4** (Tables 6 and 7).
- ENN results in sequential editing (R `X56x`): The test accuracy comparison results are shown in **Appendix D.6** (Figure 6).
- This general approach can be tested on various tasks (R `LHi3`): (1) The gradient rewiring method is particularly suitable in the graph domain due to the small model size. (2) It is more challenging to edit nodes in a graph due to the inherent propagation process within neighborhoods. (3) We acknowledge the potential applicability of our gradient rewiring strategy to other domains in **Appendix B**.
- Related work on gradient modification (R `VRYA`): We added related work on gradient modification in **Appendix E**.
- Detailed explanation for Figure 1 (R `LHi3`): We added a more detailed discussion in Section 3.1.
- Discussion on the differences with curriculum learning (R `LHi3`): We added a discussion of curriculum learning in **Appendix F**.
Based on all reviewers' comments, we carefully revised the manuscript. Please see the revised manuscript at https://anonymous.4open.science/r/Gradient_rewiring_editing-E16E/GRE_NeurIPS24.pdf. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Training-Free Open-Ended Object Detection and Segmentation via Attention as Prompts | Accept (poster) | Summary: This paper proposes a training-free open-ended object detector that leverages a VLM (CogVLM) to recognize and roughly locate objects (through the attention map) and prompts SAM with the resulting coarse points. To generate accurate point prompts, VL-SAM utilizes techniques like head aggregation, attention flow, attention score regularization, iterative refinement, and scale/prompt ensemble. VL-SAM achieves state-of-the-art performance on open-ended LVIS and CODA. The ablation shows that each component improves detection performance.
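The core step the summary describes, turning an aggregated VLM attention map into SAM point prompts, can be sketched minimally as below. Taking the peak response as a positive point and the weakest response as a negative point is an illustrative simplification, not VL-SAM's exact sampling procedure.

```python
import numpy as np

def attention_to_point_prompts(attn):
    """Illustrative sketch: normalize an aggregated attention map, then use
    the strongest location as a positive SAM point prompt and the weakest
    location as a negative point prompt."""
    a = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    pos = np.unravel_index(np.argmax(a), a.shape)  # strongest response
    neg = np.unravel_index(np.argmin(a), a.shape)  # weakest response
    return pos, neg

attn = np.zeros((8, 8))
attn[3, 4] = 1.0   # pretend the object's attention peaks here
pos, neg = attention_to_point_prompts(attn)
assert pos == (3, 4)
```

In the full pipeline these point prompts would be fed to SAM and iteratively refined; this sketch only covers the map-to-prompt conversion.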
Strengths: Combining already trained large models to achieve tasks that each cannot accomplish individually is a promising direction. The techniques proposed by the authors—head aggregation, attention flow, attention score regularization, iterative refinement, and scale/prompt ensemble—are all intuitively effective methods for improving zero-shot performance.
Weaknesses: 1. Speed. For each image, following the VL-SAM approach requires first passing through CogVLM-17B ten times, with each pass covering five scales (full image + 4 sub-images). Afterward, attention needs to be stored and calculated to obtain the prompt. Each prompt then requires iterative refinement, with each refinement step involving multiple passes through the SAM-huge model. Each step may also require post-processing like NMS. Such heavy and non-parallelizable computation raises curiosity about the model's FPS. Is the performance gain really worth it?
2. Unclear experimental details. Table 3 indicates that the highest improvements come from question ensemble and multi-scale operations. Firstly, these operations cannot be considered core contributions of VL-SAM, as they are widely used tricks. Secondly, examples of question prompts are not provided. Given that question ensemble improves performance significantly, additional analysis is required to understand why.
3. SOTA performance. On the LVIS minival dataset, the authors primarily compare against GenerateU, as mentioned on line 114. DetClipV3 also proposed an open-ended setting and achieved higher performance, yet this is not reflected in Table 1. Additionally, it should be noted that GenerateU used different types of CLIP text encoders for training and evaluation. In my attempt to unify both for retraining, I achieved approximately 25 box AP on rare classes, which is close to the 23.4 performance mentioned in the paper (though this will not affect my scoring, as it is not an issue with VL-SAM). On the CODA dataset, VL-SAM achieved performance far exceeding other methods. However, considering that the evaluation metric is AR, a natural question arises: what would be the AR if SAM's segment-anything mode (uniform sampling) was used? This is necessary to demonstrate the need for VL-SAM. Additionally, in Table 3, VL-SAM without the two ensemble methods only achieves 14.1 performance on CODA, while using both methods introduces significant speed overhead (see point 1).
4. Parameter: VL-SAM defaults to using some of the largest foundation models (SAM-Huge, ViT-E). Is the comparison of VL-SAM's parameter count to other methods fair? Additionally, what would VL-SAM's performance be if smaller-scale models were used?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Typos: line 44, "as a pure visiothe n model".
2. What is the question-prompt, and how does it affect the final performance?
3. How much would VL-SAM's performance be affected by the model size?
4. see weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes, the authors have adequately discussed the potential negative societal impact of their work and its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Speed and parameter.
As we discussed in the limitations, the speed and parameter problems can be gradually overcome by recent lightweight models. Besides, the main purpose of this paper is to provide a **feasible solution** to the open-ended perception challenge; thus, we do not focus on speed and parameter count here.
Additionally, we analyze the effect of SAM's parameters on VL-SAM, as shown in the following table. We find that a larger model achieves better performance for VL-SAM. Besides, when the SAM backbone's parameters are reduced from 636 M to 91 M, the performance of VL-SAM drops by only 2.2 mAR. This demonstrates the effectiveness of the proposed framework once again.
| SAM backbone | Parameters | CODA mAR |
| :----------- | ---------- | -------- |
| ViT-H | 636 M | 40.1 |
| ViT-L | 308 M | 39.3 |
| ViT-B | 91 M | 37.9 |
### Q2: Details and analysis of prompt generation.
We directly ask the VLM to generate question prompts by itself, without providing any image. Specifically, we use the question:
```
If we want you to list all possible objects in the given image, what questions should we ask? Please give 10 questions you prefer.
```
Here is the answer from CogVLM:
```
(1) Please analyze the image and list all the objects present.
(2) Identify and provide a comprehensive list of objects in the image.
(3) Using the image, generate a detailed inventory of all objects visible.
(4) Analyze the image and extract information about any objects present.
(5) Please describe all the objects you can identify in the image.
(6) From the image, generate a report listing all objects detected.
(7) Utilizing the image, identify and report all objects visible.
(8) Analyze the image and provide a comprehensive breakdown of all objects seen.
(9) Please process the image and generate a detailed list of all objects present.
(10) Using the image, identify and describe all objects observed.
```
Moreover, as Reviewer 6VtQ mentioned, we provide the average recall (AR) of VLM for object name generation in the following table. We find that prompt ensemble significantly alleviates the object missing problem in VLM. Thus, it can improve overall performance.
| Dataset | Prompt Ensemble | $AR_{0.5}$ | $AR_{0.7}$ | $AR_{0.9}$ |
| :------ | --------------- | ---------- | ---------- | ---------- |
| LVIS | $\times$ | 0.973 | 0.932 | 0.404 |
| LVIS | $\checkmark$ | 0.988 | 0.984 | 0.604 |
We will add the discussion to the paper.
### Q3: AR of SAM's segment-anything mode (uniform sampling).
We use SAM's segment-anything mode to obtain box proposals and calculate the class-agnostic AR. The AR is 29.7. However, the boxes predicted by SAM do not contain category results. In contrast, VL-SAM can predict boxes with categories.
---
Rebuttal Comment 1.1:
Comment: I have read all the rebuttals from the authors and the reviews from the other reviewers. I appreciate the authors' efforts. One of the motivations for open-ended object detection is to make object detection more applicable in real-world scenarios. While speed might not be the primary concern, other open-ended methods in the literature also use heavy decoders (compared to the MLPs in traditional detectors) to generate class names, and the operations proposed in this paper are clearly associated with significant delays. As a technical paper, it is common to consider the trade-offs between speed and accuracy. I do not wish to undermine the advantages of the proposed training-free open-ended approach, but in the absence of other exceptional contributions, and given that proposed operations such as "Iterative Refine" and "Multi-scale" evidently introduce delays, a detailed discussion is essential. I will raise my score to borderline, but with a slight inclination towards rejection.
---
Reply to Comment 1.1.1:
Comment: Thanks for your valuable time and your reply. We respond to your remaining concerns below:
Q: Consider the trade-offs between speed and accuracy
A: We agree that many technical papers consider the trade-offs between speed and accuracy. However, there are many papers that only consider developing high-accuracy detectors, including InternImage[1], Co-DETR[2], and CBNet[3]. Besides, high-accuracy detectors (including our method) can be applied to offline scenarios, such as auto-labeling for autonomous driving data.
Thus, we argue that the proposed method can be a feasible solution to address the open-ended perception challenge and should not be ignored by the community.
We sincerely thank you for your active involvement in the discussion and hope to hear more from you about your concerns and kind suggestions.
[1] Wang, Wenhai, et al. "Internimage: Exploring large-scale vision foundation models with deformable convolutions." *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*. 2023.
[2] Zong, Zhuofan, Guanglu Song, and Yu Liu. "Detrs with collaborative hybrid assignments training." *Proceedings of the IEEE/CVF international conference on computer vision*. 2023.
[3] Liang, Tingting, et al. "Cbnet: A composite backbone network architecture for object detection." *IEEE Transactions on Image Processing* 31 (2022): 6893-6906. | Summary: This work proposes a novel approach for the so-called open-ended detection (and segmentation) problem, which is about localizing and naming objects in a given image without the user having to specify any pre-defined label space. This is relevant for scenarios where it is hard to define a complete list of object categories that are relevant for the task. For example, autonomous agents acting in the real world may encounter all sorts of objects, but they may not be detected if their category name is missing in the pre-defined label space. The proposed algorithm is interesting because it does not require any model training, but uses readily available vision-language and class-agnostic segmentation models. The experiments also demonstrate strong performance compared to prior works in this setting.
Strengths: - Although one might argue that the proposed approach is just a collection of existing models, the proposed combination works, does not require any training, and achieves good results.
- I think the proposed method can be a very useful tool for many in the research community and in industry. The paper can also be seen as a recipe on how to build similar pipelines for other tasks, without the need to train models.
- From Section 3 onward, I think the paper was easy to follow. The method is well explained.
- The ablation study of all individual components is great.
Weaknesses: - Experiments
- The numbers for "open-set" (or open-vocabulary) in Table 1 are not state-of-the-art. Looking at OWLv2 [A], even the models not using the LVIS dataset achieve 35.4, 39.0 and 40.9 AP-rare on LVIS-mini with different backbones.
- The numbers for non-rare classes in LVIS are not reported in Table 1. Assuming the proposed approach should be a competitive replacement for other object detectors (fixed set, open-set), then I would want to know the numbers on the base classes as well. I expect such a method that relies on attention maps for localization may underperform compared to other methods.
- One limitation of the proposed two-step pipeline can be that the initial VLM misses some object names in the caption. The model would not recover even if the segmentation model could segment those objects. Although I saw some Average Recall (AR) numbers in the experiments, I'd be interested in recall of the VLM alone in identifying all object category names that are present. That would have been an easy experiment that should be conducted on standard detection datasets like COCO and LVIS.
- Paper writing
- I had real trouble reading the abstract (and most parts of the introduction as well) and trying to understand what task is being solved. The exact definitions of the terms open-world, open-set, and open-ended used in the abstract may not be clear to everyone. In fact, I think these terms are used inconsistently throughout the ML/CV literature.
- The statement that "pre-defined object categories are not available in real-world scenarios" probably needs (much) more context to be a valid argument. For instance, autonomous vehicle companies have a list of a few hundred object categories that they expect on the road - hence, it's pre-defined - while unknown objects are handled differently through generic obstacle detection.
- The name "GenerateU" is used in the abstract without a reference - that's not common knowledge yet - it's a CVPR'24 paper that was recently presented.
- More details are needed in some parts of the paper:
- The reason for collapse of the aggregated attention maps and the corresponding regularization need more details.
- Section 3.6 needs more details. To me it seems like the VLM is asked to generate prompts for itself. Is that then dependent on the input image? And what's the prompt to generate more question pairs?
References:
- [A] Scaling Open-Vocabulary Object Detection. Minderer et al. NeurIPS'23
Technical Quality: 2
Clarity: 2
Questions for Authors: - Open Set methods described in lines 104ff are also often referred to as open-vocabulary perception models. I would say more commonly they are referred to as such.
- Typo in line 153? "casual" -> "causal"? Same in line 169.
- The indexing in Eq. 3 regarding the layer index is inconsistent with the above definition.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Limitations of the proposed method were discussed in the main paper. One potential limitation to add would be the inherent two-step approach of the proposed pipeline which cannot recover from mistakes that the initial VLM makes in the captioning output, for instance missing object names.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: The performance of OWLv2.
Thanks for the reminder. We will add it to the paper.
### Q2: Results for non-rare classes in LVIS.
The following table provides results for common ($AP_c$) and frequent ($AP_f$) classes of LVIS. We can find that, though Close-set and Open-set methods achieve better results on $AP_c$ and $AP_f$, they require 1203 object names from LVIS as input. Besides, our method is training-free and achieves competitive results with GenerateU.
|Method|Type|Require Category|Training|Mask|$AP_r$|$AP_c$|$AP_f$|
|:-|-|-|-|-|-|-|-|
|Mask R-CNN|Close-set|Yes|Yes|Yes|26.3|34.0|33.9|
|Deformable DETR|Close-set|Yes|Yes|No|24.2|36.0|38.2|
|GroundingDINO|Open-set|Yes|Yes|No|18.1|23.3|32.7|
|DetCLIP|Open-set|Yes|Yes|No|26.9|33.9|36.3|
|YOLOWorld|Open-set|Yes|Yes|No|27.1|32.8|38.3|
|GenerateU|Open-ended|No|Yes|No|22.3|25.2|31.4|
|Ours|Open-ended|No|No|Yes|23.4|25.3|30.0|
### Q3: Average recall (AR) of VLM for object name generation.
The following table provides the AR of the VLM for object name generation. $AR_{x}$ denotes the average recall where a prediction is viewed as positive if its CLIP similarity score is larger than $x$.
|Dataset|Prompt Ensemble|$AR_{0.5}$|$AR_{0.7}$|$AR_{0.9}$|
|:-|-|-|-|-|
|LVIS|No|97.3|93.2|40.4|
|LVIS|Yes|98.8|98.4|60.4|
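The threshold-based recall in the table can be sketched in a few lines. This is a minimal illustration under our reading of the definition (the score values are hypothetical), assuming the best CLIP similarity between each ground-truth object and the generated names has already been computed:

```python
def average_recall(best_similarities, threshold):
    """Fraction of ground-truth objects whose best CLIP similarity
    to any generated name exceeds the threshold."""
    if not best_similarities:
        return 0.0
    hits = sum(1 for s in best_similarities if s > threshold)
    return hits / len(best_similarities)

# hypothetical best-similarity scores for four ground-truth objects
scores = [0.95, 0.82, 0.66, 0.91]
print(average_recall(scores, 0.7))  # 0.75
```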
### Q4: Cannot recover missing objects.
As shown in the table of Q3, we are currently trying to reduce the number of missing objects with the prompt ensemble. In the future, we will adopt techniques like chain-of-thought prompting to recover missing objects.
### Q5: Clarifying the concepts of open-world, open-set, and open-ended perception.
As the reviewer mentioned, these three terms are used inconsistently in the ML/CV literature. In our opinion, **open-world perception** is a broad concept. It tries to give precise results in dynamic and unpredictable environments, which contain novel objects and involve scene domain shifting. Open-set and open-ended perception are subtasks of open-world perception, and try to address the novel objects problem. Specifically, **open-set perception**, like the grounding task, predicts object locations when given images and novel object names. In contrast, **open-ended perception** is similar to traditional perception that predicts novel object locations and their names simultaneously when only given images.
We will add the discussion to the paper.
### Q6: Argument of "predefined object categories are not available in real-world scenarios."
The reviewer mentioned that "autonomous vehicle companies have a list of a few hundred object categories - hence, it's predefined." Though they have predefined hundreds of categories, there may still be categories they do not include, such as various rare animals. Besides, some objects cannot be represented by a simple category name, such as a human in an animal costume, which may look like an animal but is actually a human.
As the reviewer said, generic obstacle detection can handle some unknown objects. However, many things do not have a significant 3D shape, like pits or grains on the ground. Thus, open-set methods cannot handle all situations.
We will add the discussion to the paper.
### Q7: Reason for attention maps collapse when aggregation.
The collapse in attention map aggregation is caused by the causal mask. For example, assume uniformly distributed attention maps (3$\times$3 for simplicity) in all transformer layers (2 layers in total for simplicity):
$$
\begin{pmatrix}
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
\frac{1}{3} & \frac{1}{3} & \frac{1}{3}
\end{pmatrix}
$$
After using the attention rollout method with the causal mask, we can obtain the final attention map:
$$
\left[I+\begin{pmatrix}
\frac{1}{3} & 0 & 0 \\
\frac{1}{3} & \frac{1}{3} & 0 \\
\frac{1}{3} & \frac{1}{3} & \frac{1}{3}
\end{pmatrix}\right]\times\left[I+\begin{pmatrix}
\frac{1}{3} & 0 & 0 \\
\frac{1}{3} & \frac{1}{3} & 0 \\
\frac{1}{3} & \frac{1}{3} & \frac{1}{3}
\end{pmatrix}\right]=I+\begin{pmatrix}
\frac{7}{9} & 0 & 0 \\
\frac{8}{9} & \frac{7}{9} & 0 \\
1 & \frac{8}{9} & \frac{7}{9}
\end{pmatrix}
$$
We can see that, in each row, the entries in the front (left) columns are greater than those in the back columns, *e.g.*, $\frac{8}{9}>\frac{7}{9}$ and $1>\frac{8}{9}>\frac{7}{9}$. Thus, simply adopting attention rollout with the causal mask centralizes the attention activation in the front patches, *i.e.*, toward the top-left corner of the image, as shown in Figure 5 in the paper.
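The worked example can be reproduced numerically. The following minimal sketch (plain Python, for illustration only) builds the causal-masked uniform attention map and multiplies the two rollout factors:

```python
# Reproduce the worked example: uniform 3x3 attention under a causal
# (lower-triangular) mask, rolled out over 2 layers.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 3
A = [[1 / 3 if j <= i else 0.0 for j in range(n)] for i in range(n)]  # masked map
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]    # identity
IA = [[I[i][j] + A[i][j] for j in range(n)] for i in range(n)]

rollout = matmul(IA, IA)  # (I + A)(I + A): two rollout steps
# Subtracting I recovers the matrix from the example:
# [[7/9, 0, 0], [8/9, 7/9, 0], [1, 8/9, 7/9]]
delta = [[rollout[i][j] - I[i][j] for j in range(n)] for i in range(n)]
```

Each row of `delta` indeed puts its largest mass in the leftmost (front) columns, which is the collapse described above.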
### Q8: Details of regularization.
As discussed in Q7, directly adopting attention rollout makes the numbers in the front columns greater than those in the back columns. We introduce a simple regularization that multiplies the front columns by a small term and the back columns by a larger one. The value of the term follows a simple linear descent: $1-L_0/L$.
### Q9: Details of prompt generation.
We directly ask the VLM to generate question prompts by itself, without providing images. Specifically, we use the question:
```
If we want you to list all possible objects in the given image, what questions should we ask? Please give 10 questions you prefer.
```
Here is the answer from CogVLM:
```
1) Please analyze the image and list all the objects present.
2) Identify and provide a comprehensive list of objects in the image.
3) Using the image, generate a detailed inventory of all objects visible.
4) Analyze the image and extract information about any objects present.
5) Please describe all the objects you can identify in the image.
6) From the image, generate a report listing all objects detected.
7) Utilizing the image, identify and report all objects visible.
8) Analyze the image and provide a comprehensive breakdown of all objects seen.
9) Please process the image and generate a detailed list of all objects present.
10) Using the image, identify and describe all objects observed.
```
We will add the details in the paper.
### Q10: Typos and indexing inconsistency.
Thanks. We will revise them in the paper.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal. I appreciate the authors' efforts in addressing all of my concerns - most of which have satisfactory explanations. I currently intend to keep the rating positive. | Summary: In this paper, the authors proposed to combine vision language models and segment anything model (SAM) for open-ended object detection. Attention maps generated from vision language models were used to prompt SAM. Experiments on multiple benchmark datasets demonstrated better performance over several baseline methods.
Strengths: 1. Combining vision language model and SAM is very interesting and could lead to lots of useful applications. In this paper, the authors proposed an efficient and effective approach to connect them.
2. The proposed attention map generation from vision language models and prompt generation for SAM make sense and are technically sound to me.
3. Extensive experiments and ablation studies were conducted to validate the effectiveness of the proposed approach.
Weaknesses: 1. Even though the authors claimed that the proposed approach is open-ended, the evaluation is still on predefined object category names and the performance is worse than some open-set baseline methods.
2. The performance of the proposed approach heavily depends on the vision language model since the attention map is the key component. Thus, how to choose a good vision language model is essential for the proposed framework. Even though the authors did an ablation study of model generation in Table 5, I would expect more discussion on how to choose the best vision language model. Are there any principles or required characteristics?
3. Several typos: "visiothe n" in line 44 and "Generobtainsnd" in line 71.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weakness section to discuss more on the evaluation and vision language model selection.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors discussed the limitations in the draft and I have no further suggestions for improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Evaluation is still on predefined object category names.
The inference of VL-SAM **does not** rely on predefined object category names; we only use them to calculate the evaluation metrics.
Specifically, as Lines 211-213 mentioned, we use VL-SAM to generate object categories by itself. However, the generated object categories from VL-SAM may not align with the category names specified in the LVIS dataset. For example, VL-SAM may generate "kids" or "adults," while the LVIS dataset provides a "person" label for these objects. To address this, we follow GenerateU to use the CLIP text encoder to map generated object categories from VL-SAM to the predefined class names in LVIS for mAP evaluation by calculating their similarities.
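The mapping step can be sketched as a nearest-neighbor lookup in the text-embedding space; the toy 2-d vectors below are hypothetical stand-ins for real CLIP text features:

```python
# Map generated names to LVIS classes by text-embedding similarity.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def map_to_lvis(generated_embs, lvis_embs):
    """Map each generated category name to its most similar LVIS class."""
    return {name: max(lvis_embs, key=lambda c: cosine(emb, lvis_embs[c]))
            for name, emb in generated_embs.items()}

lvis_embs = {"person": [1.0, 0.1], "dog": [0.1, 1.0]}   # hypothetical features
generated = {"kids": [0.9, 0.2], "puppy": [0.2, 0.95]}  # hypothetical features
print(map_to_lvis(generated, lvis_embs))  # {'kids': 'person', 'puppy': 'dog'}
```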
### Q2: Performance is worse than open-set methods.
Our method addresses the **open-ended** detection and segmentation problem, which is different from the **open-set** problem. As mentioned in Lines 34-39, **open-set** methods require predefined object categories as inputs during inference. In contrast, **open-ended** methods do not need to input predefined categories and can predict the object categories themselves. Therefore, it is unfair to compare the performance of our method with open-set methods directly. They are two different tasks.
Moreover, our method is training-free, while open-set methods listed in the paper need additional training.
### Q3: Vision language model selection.
As shown in Table 5 of our paper, empirical results demonstrate that VLMs with more powerful multi-modal chat and reasoning capabilities perform better when integrated into VL-SAM. Thus, we suggest using a stronger VLM.
Besides, our framework is general. Any VLM that generates object names from images and provides attention maps can be incorporated.
### Q4: Typos.
Thanks. We will revise them accordingly.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. Since some of the concerns are addressed, I will raise my rating to borderline accept. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive comments. They recognize that our work can "lead to lots of useful applications" (71UW, 6VtQ, bgFd), "achieves good results" (71UW, 6VtQ), and is "well explained" (71UW, 6VtQ) and "effective" (71UW, bgFd). We address the reviewers' concerns in the rebuttal text below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks | Accept (poster) | Summary: This paper introduces Optimus-1, a retrieval augmented generation method that enables minecraft agents to excel in long-horizon tasks. The proposed method is based on a Hybrid Multimodal Memory Module that consists of an Abstracted Multimodal Experience Pool and a Hierarchical Directed Knowledge Graph. Those two key components are mainly proposed to solve the challenge of insufficient structured knowledge and lack of multimodal experience. Empirical evaluations show that such a memory module can effectively improve the performance of a minecraft agent in long-horizon planning tasks. Ablative experiments are conducted to validate the effective of each component.
Strengths: This paper addresses an important problem that transforms LLM into autonomous decision making agents.
The proposed method achieves a significant improvement over prior state-of-the-art in Minecraft, particularly exceling at long-horizon tasks such as getting diamonds.
Weaknesses: The writing of this paper can be improved. In particular, I would suggest a major rewriting of the method section that 1) follows a top-down organization to first talk about high-level ideas of the method before going to the details such as "1 frame per second" and "a window size of 16", and 2) reduces unnecessary complications of the terminology including Hybrid Multimodal Memory Module, Hierarchical Directed Knowledge Graph, etc.
Major claims in the introduction are not supported by empirical evidence. It is unclear whether GPT4V etc does not have sufficient structured knowledge and related works in multi-modal agents do employ multi-modal experiences [1].
It seems that the use of Hierarchical Directed Knowledge Graph is limited to Minecraft where there is a strict Directed Graph relation between different objects. It is unclear whether this method can be helpful in the general setting.
The performances are evaluated on a set of custom benchmarks, and it will be good to have results on prior benchmarks reported by the baseline methods such as Voyager and Jarvis.
Minor:
Line 25 "Early research [1, 6, 17] developed simple agents by constructing policy networks." seems unclear what those policy networks are.
Line 30, what long horizon tasks are the authors talking about?
Line 36 grammar error "Insufficient of Structured Knowledge", what is empirical evidence for existing agents do not have structured knowledge?
Line 70 and line 71, two 30% seems repetitive.
Line 108, where is the subgoal coming from?
Line 115-116, what are those reflection phases?
In line 206, why did the authors contruct a new benchmark instead of following the prior benchmarks on minecraft?
[1] Zhang, C., Yang, Z., Liu, J., Han, Y., Chen, X., Huang, Z., … Yu, G. (2023). AppAgent: Multimodal Agents as Smartphone Users.
[2] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023
[3] Zihao Wang, Shaofei Cai, Anji Liu, Yonggang Jin, Jinbing Hou, Bowei Zhang, Haowei Lin, Zhaofeng He, Zilong Zheng, Yaodong Yang, et al. Jarvis-1: Open-world multi-task agents with memory-augmented multimodal language models. arXiv preprint arXiv:2311.05997, 2023.
Technical Quality: 1
Clarity: 1
Questions for Authors: See weakness
Confidence: 4
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: Limitations are not adequately discussed in the paper. One potential limitation is that the performance of RAG methods may be capped by the performance of the base model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: The writing of this paper can be improved.
A1: Thank you for your valuable suggestions. In the method section, we first introduce the core contribution of this paper, Hybrid Multimodal Memory, including its motivation, innovations, and components. Based on it, we developed a novel agent framework, Optimus-1. We follow a top-down organization to introduce Optimus-1’s components. Following your suggestions, we will improve the organization and logic of the paper to minimize potential misunderstandings for readers.
> Q2: It is unclear whether GPT4V etc does not have sufficient structured knowledge and related works in multi-modal agents do employ multi-modal experiences [1].
A2: (1) **Table 2 in the manuscript** shows that in the Minecraft environment, the removal of structured knowledge from Optimus-1 (based on GPT-4V) leads to a significant decrease in success rates across all task groups. It proves that GPT4V lacks sufficient knowledge of the Minecraft environment.
(2) AppAgent does not employ multimodal experiences. During the stage of free exploration, it records the effects of actions applied to different UI elements in the text modality only. In contrast, Optimus-1 records not only the tasks, environmental information, agent initial state, and plan in the text modality, but also abstract visual information in the image modality. Furthermore, it dynamically summarizes long-sequence multimodal information, significantly reducing memory and retrieval costs.
> Q3: It is unclear whether this method can be helpful in the general setting.
A3: In the future, we will extend the Hybrid Multimodal Memory to other domains. However, we believe that the current environment and experiments sufficiently demonstrate the contribution and effectiveness of our work. Please refer to the response to reviewer 3HCc, A3.
> Q4: The performances are evaluated on a set of custom benchmarks, and it will be good to have results on prior benchmarks reported by the baseline methods such as Voyager and Jarvis.
A4: (1) Our benchmark is extensive and comprehensive, involving the most common long-horizon tasks in Minecraft. Moreover, we add the average steps (AS) and average time (AT) of completing the task as evaluation metrics, to better evaluate the efficiency of the agent. Furthermore, we have constructed a baseline for human evaluation, which represents a major contribution compared to the previous benchmark.
(2) We have evaluated the performance of Optimus-1 on prior benchmarks reported by DEPS [1], Voyager [2], MP5 [3] (**Table 15, Table 16, Figure 8 in the Appendix**). Extensive experimental results demonstrate that Optimus-1 outperforms all baselines.
> Q5: Minor: Line 25 "Early research [1, 6, 17] developed simple agents by constructing policy networks." seems unclear what those policy networks are.
A5: Policy networks in line 25 refer to models based on the Transformer architecture, trained through reinforcement learning/imitation learning. We will revise the wording to avoid potential misunderstandings by readers.
> Q6: Line 30, what long horizon tasks are the authors talking about?
A6: In Minecraft, long-horizon tasks refer to complex tasks that require the agent to continually interact with a complex environment to complete a long sequence of sub-goals. **We provide a detailed explanation and examples in Appendix C.3**.
> Q7: Line 36 grammar error "Insufficient of Structured Knowledge"
A7: We will revise it to 'Insufficient Exploration of Structured Knowledge'.
> Q8: Line 70 and line 71, two 30% seems repetitive.
A8: We will remove “Optimus-1 closes the gap between agents and human player performance by 30%”.
> Q9: Line 108, where is the subgoal coming from?
A9: As stated in line 152-153, sub-goals are plans generated by the Knowledge-Guided Planner. We will revise the description of sub-goals in line 108 to avoid misunderstanding by readers.
> Q10: Line 115-116, what are those reflection phases?
A10: As stated in line 173, reflection results from Experience-Driven Reflector are categorized as COMPLETE, CONTINUE, and REPLAN. We will revise the description of reflection phases in line 115 to avoid misunderstanding by readers.
> Q11: Limitations are not adequately discussed in the paper. One potential limitation is that the performance of RAG methods may be capped by the performance of the base model.
A11: **We discuss the limitations in Appendix B**. **Figure 5 in the manuscript** indicates that the proposed hybrid multimodal memory (using RAG technology) is adaptable to GPT-4V and open-source multimodal large language models (MLLMs). Various MLLM-based versions of Optimus-1 have demonstrated performance improvements ranging from 2 to 6 times.
[1] Wang et al. Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents. 2023.
[2] Wang et al. Voyager: An open-ended embodied agent with large language models. 2023.
[3] Qin et al. MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception. 2024.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thank the authors for the rebuttal. Thanks for pointing me to the comparisons in prior benchmarks. The rebuttal has clarified some of my concerns and I will adjust my score accordingly. However, there are still two important concerns remaining:
### The writing of the paper can be improved.
As discussed above, the writing can be significantly improved. Currently the description of the technical section is very convoluted and can be better organized. Some of the terminologies are overly complicated, e.g. Hybrid Multimodal Memory, what's its difference with just Multimodal Memory? Abstracted Multimodal Experience Pool can be simplified to Multimodal Experience Pool. IMHO, such complication in terminology only makes it harder for readers to understand.
Furthermore, it would be great to ensure the major claims in the paper are scientific as supported by the experiments. In particular, the experiments only show that it is important for the agent to know the rules of Minecraft (e.g. how are materials synthesized) and they are not enough to show GPT4V does not have structured knowledge.
A significant effort is required for the work to be able to be published at top-tier ML conference.
### The application beyond minecraft remains unclear.
In particular, a potential reason for HDKG to work well in minecraft is that there is a very clear game rule in minecraft (e.g. to make a stone axe we need stone and wooden sticks etc), and it is very unclear to me whether such designs can generalize to more realistic settings like real-world web navigation agents or robotics. I did read the response to reviewer 3HCc but I am not convinced that a better method for minecraft is interesting itself, unless it can be easily generalized to more realistic settings.
---
Reply to Comment 1.1.1:
Title: Responses to Reviewer wzp9 (1)
Comment: Thank you very much for taking the time to discuss with us despite your busy schedule. Regarding your concerns, we respond as follows:
> Q1: Currently the description of the technical section is very convoluted and can be better organized.
A1: We appreciate your valuable and constructive feedback, which will be pivotal in enhancing the quality of our work. To present the methodology with clear logic, we first introduce the proposed hybrid multimodal memory module, followed by a detailed description of the Optimus-1 architecture. The hybrid multimodal memory is coupled with the operation mechanism of Optimus-1, while the current version lacks detailed descriptions of sub-goals, the reflection module, etc. We will add these descriptions in Section 2.1.1 to ensure logical coherence and avoid reader confusion. Additionally, in each subsection, we will organize the content in a top-down manner, following the sequence of motivation, high-level idea of the method, and implementation details. We will revise the manuscript based on your suggestion. We also note that the reviewers found the manuscript well-written and easy to follow (R#397y, R#AfEc), so these revisions will not affect the main contribution of our work.
> Q2: Some of the terminologies can be simplified.
(1) **These terminologies reflect the characteristics and innovations of proposed methods**. For example, the Hierarchical Directed Knowledge Graph links knowledge at different levels (wooden, stone, diamond) through **Directed** graphs, forming a **Hierarchical** knowledge graph, which differs from previous knowledge graphs. The Abstracted Multimodal Experience Pool **Abstracts** long-sequence multimodal historical information into multimodal experiences, whereas existing multimodal memory mechanisms do not summarize multimodal information.
(2) **We can use abbreviations in the manuscript to describe these terminologies**. Although we introduced the Hierarchical Directed Knowledge Graph (HDKG) and the Abstracted Multimodal Experience Pool (AMEP) in Section 1 of the manuscript, we have frequently used the full names in subsequent sections out of concern that readers might forget or misunderstand the meanings of HDKG and AMEP. We will consider using abbreviations in the manuscript to describe these terminologies to facilitate easier reading for the readers.
> Q3: Difference between Hybrid Multimodal Memory and just Multimodal Memory.
A3: **Hybrid Multimodal Memory is different from existing multimodal memory**. As stated in the Introduction section of the manuscript, the hybrid multimodal memory module consists of structured knowledge (graphs) and multimodal experiences (text, image sequences). It stores multiple heterogeneous contents in a **mixed manner** and dynamically summarizes long-sequence multimodal information.
In contrast, existing agents only store text and images as multimodal memory and do not summarize them. For example, Jarvis-1 [1] stores text and image sequences without summarizing multimodal information.
Moreover, the comparison of memory mechanisms in existing Minecraft agents is shown in **Table 7 of the manuscript**.
> Q4: Ensure the major claims in the paper are scientific as supported by the experiments.
A4: Thank you for your suggestions. We will revise the claim in the Introduction section in the manuscript: “Existing Multimodal Large Language Models such as GPT-4V lack sufficient knowledge in Minecraft”, and results of **Table 2 in the manuscript** support this claim.
**The rest of the responses are in the next comment**.
---
Rebuttal 2:
Title: Responses to Reviewer wzp9 (2)
Comment: This comment connects to the Responses to Reviewer wzp9 (1)
> Q5: It is unclear whether such designs can generalize to more realistic settings like real-world web navigation agents or robotics.
A5: (1) To address your concerns about the generalization of our method in real-world scenarios, we applied Optimus-1 to the app agent scenario. We followed the environment and settings of AppAgent [2] and conducted comparative experiments on its benchmark (9 apps with a total of 45 tasks). The experimental results in the table below show that Optimus-1 outperforms the AppAgent and GPT-4 baselines. This reveals that Optimus-1 can generalize to more realistic settings, such as real-world app navigation agents.
Tab 1: Experiments on the benchmark of AppAgent. We report the success rate of the agent in completing 45 tasks.
| Method | Success Rate |
| --- | --- |
| GPT-4 | 48.9% |
| AppAgent | 73.3% |
| Optimus-1 | **86.7%** |
(2) Proposed Hybrid Multimodal Memory is a general architecture. As mentioned in lines 31 to 35 in the manuscript, we were inspired by the theory that 'humans benefit from knowledge and experience when performing long-sequence tasks' , and propose a novel Hybrid Multimodal Memory structure that incorporates both knowledge and experience into the memory mechanism. We argue that it is general and can be adapted according to different environments. In the experiments mentioned above in the app scenario, the key step is obtaining the logical relationships between buttons or actions and converting them into a knowledge graph. Once these logical relationships are established, HDKG can easily be adapted to the app environment. As for AMEP, it can be simplified to store the task prompt, images, and actions for each atomic operation.
We hope that the results of these experiments can address your concerns, and we would greatly appreciate it if you could consider giving us a higher rating. If you have any further questions, please feel free to contact us.
[1] Wang et al. Jarvis-1: Open-world multi-task agents with memory-augmented multimodal language models. 2023.
[2] Zhang et al. AppAgent: Multimodal Agents as Smartphone Users. 2023.
---
Rebuttal Comment 2.1:
Comment: Thank the authors for the additional response on the clarity of the paper and results on real-world app navigation. I am more convinced of the proposed method being general and increase the score. I hope that those new results and rewriting can be reflected in a future version of the paper.
---
Reply to Comment 2.1.1:
Title: Responses to Reviewer wzp9
Comment: We are pleased to have addressed your concerns. We will incorporate your valuable suggestions in future revisions and add more experiments in the Appendix. Additionally, we have built an official repository to provide well-structured open-source codes and a project page (to be released upon acceptance). | Summary: This paper presents Optimus-1, a multimodal agent that focuses on Minecraft tasks. Specifically, Optimus-1 is equipped with a Hybrid Multimodal Memory including: 1) a Hierarchical Directed Knowledge Graph that stores the world knowledge through free exploration and teacher guidance; 2) an Abstracted Multimodal Experience Pool that enables Optimus-1 to reason about the current situation by using past experience. Based on the multimodal memory, Optimus-1 adopts a Knowledge-Guided Planner and an Experience-Driven Reflector to generate better plan and reflect periodically in long-horizion tasks. Experiments results illustrates the effectiveness of the multimodal memory of Optimus-1 in long-horizion tasks.
Strengths: 1. The paper introduces a multi-modal memory mechanism that includes a hierarchical world knowledge graph and a multi-modal past experience pool. The memory is later utilized by the multi-modal planner and reflector module of the Optimus-1.
2. Good experiment results illustrate the effectiveness of Optimus-1 compared to other strong baselines. Sufficent ablation studies of the effectiveness of the modules in Optimus-1 as well as the necessity of both success and failure cases for reflection.
3. The paper is well written.
Weaknesses: 1. The details of the construction procedure of AMEP are unclear (e.g. How to maintain the image buffer by computing image similarity? What is the threshold used in MineCLIP? )
2. How Optimus-1 acquires world knowledge through free-exploration is not specified.
3. There are other minecraft agents (Jarvis-1, etc.) using multi-modal memory. The paper claims the efficiency of the memory storage and retrieval compared to Jarvis-1, but this is not quantitatively evaluated.
4. It seems that the low-level action controller will not be updated through reflection, which limits the effectiveness of the reflection pipeline.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the construction of AMEP, given the video stream, how to adaptively update the abstracted frames in the buffer? How is the threshold of MineCLIP determined?
2. In the free-exploration phase, what is the efficiency of Optimus-1 to learn an entry of world knowledge such as "a stone sword can be crafted with a wooden stick and two cobblestones" through random exploration?
3. In the reflection phase, how to retrieve past experience from AMEP? Is the retrieval based on the image similarity, task goal, or a combination of both? How does the fail case contribute to the final success of the task?
4. In Table 1, how is GPT-4V evaluated? Does GPT-4V have the same world knowledge as Optimus-1?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: In the construction of AMEP, given the video stream, how to adaptively update the abstracted frames in the buffer? How is the threshold of MineCLIP determined?
A1: (1) **As described in Section 2.1.1**, for video streams, we filter video streams at 1-second intervals and store them in a variable-length video buffer. These filtered images sequentially enter an image buffer with a window size of 16. When the image buffer reaches its capacity and a new image is added, we calculate the cosine similarity between frames and remove one frame from the pair with the highest similarity. Through this process, we can dynamically preserve abstracted frames in the image buffer.
(2) We empirically set the threshold for MineCLIP at 0.7.
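The buffer update in (1) can be sketched as follows (our illustration, not the authors' exact code; raw vectors stand in for real frame features):

```python
# Keep at most `window` frames; on overflow, drop one frame from the
# pair with the highest cosine similarity to preserve diversity.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def add_frame(buffer, frame, window=16):
    buffer.append(frame)
    if len(buffer) > window:
        # find the pair of frames with the highest cosine similarity
        i, j = max(
            ((a, b) for a in range(len(buffer)) for b in range(a + 1, len(buffer))),
            key=lambda p: cosine(buffer[p[0]], buffer[p[1]]),
        )
        del buffer[j]  # remove one frame of that pair
    return buffer

buf = []
for frame in ([1.0, 0.0], [0.0, 1.0], [1.0, 0.01]):
    add_frame(buf, frame, window=2)
print(buf)  # the near-duplicate of the first frame has been dropped
```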
> Q2: How Optimus-1 acquires world knowledge through free-exploration? what is the efficiency of Optimus-1 to learn an entry of world knowledge such as "a stone sword can be crafted with a wooden stick and two cobblestones" through random exploration?
A2: (1) During the free exploration phase, Optimus-1 will randomly initialize the environment, materials, and tasks. It will freely explore basic tasks, such as chop down trees, mine stones with pickaxes, craft stone swords, etc. When the environment feedback indicates that a task is completed, the corresponding relationship (e.g., {1 wooden stick, 2 planks} → {1 wooden sword}) is updated into the HDKG.
(2) We use task decomposition and parallelized methods to enable Optimus-1 to learn world knowledge efficiently. Given the initial materials, Optimus-1 only needs to perform one sub-goal (chop a tree, mine iron ore, craft a stone sword by a wooden stick and two cobblestones, etc.) each time, which enables Optimus-1 to quickly complete the task and then learn knowledge. Furthermore, we initialize multiple instances of Optimus-1, which share the same HDKG and AMEP. This allows Optimus-1 to efficiently learn such knowledge.
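The knowledge update in (1) can be sketched as maintaining a directed mapping from required materials to products; the structure below is illustrative, not the exact HDKG implementation:

```python
# Record a crafting relation once environment feedback confirms the
# task was completed (directed edges: materials -> product).
hdkg = {}  # product -> required materials with counts

def update_knowledge(graph, materials, product):
    """Write the relation {materials} -> {product} into the graph."""
    graph[product] = dict(materials)

update_knowledge(hdkg, {"wooden stick": 1, "planks": 2}, "wooden sword")
update_knowledge(hdkg, {"wooden stick": 1, "cobblestone": 2}, "stone sword")
print(hdkg["stone sword"])  # {'wooden stick': 1, 'cobblestone': 2}
```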
> Q3: There are other Minecraft agents (Jarvis-1, etc.) using multi-modal memory. The paper claims better efficiency of memory storage and retrieval compared to Jarvis-1, but this is not quantitatively evaluated.
A3: Jarvis-1 [1] stores all images without summarization, while our approach summarizes each sub-goal and retains only 16 images per sub-goal, significantly improving storage efficiency. Taking the example of "craft a wooden pickaxe" shown on its demo website: Jarvis-1 executes 1,139 steps, storing 1,139 images. In contrast, we store only 5 sub-goals × 16 images = 80 images, a 14x improvement in storage efficiency. With a smaller memory store, retrieval efficiency is naturally higher as well.
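The 14x figure in A3 follows from quick arithmetic (the step and image counts are those quoted from the Jarvis-1 demo example):

```python
# Jarvis-1 stores one image per executed step in its demo example,
# while AMEP keeps a fixed 16-frame summary per sub-goal.
jarvis_images = 1139        # steps (and images) for "craft a wooden pickaxe"
amep_images = 5 * 16        # 5 sub-goals x 16 abstracted frames = 80
ratio = jarvis_images / amep_images
print(round(ratio, 1))      # -> 14.2
```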
> Q4: It seems that the low-level action controller will not be updated through reflection, which limits the effectiveness of the reflection pipeline.
A4: The action controller does not affect the effectiveness of the reflection pipeline. The purpose of reflection is to correct the planner: the Reflector evaluates the current situation based on multimodal experience to determine whether the planner needs to replan. If so, the planner generates a new plan, which the action controller then executes; if not, the action controller continues executing the current sub-goal.
> Q5: In the reflection phase, how is past experience retrieved from AMEP? Is the retrieval based on image similarity, the task goal, or a combination of both? How do failure cases contribute to the final success of the task?
A5: (1) During the reflection phase, we use a text-matching method on the task goal to retrieve success and failure cases. When there are multiple similar cases, we select the one with the highest image similarity.
(2) Failure cases serve as in-context examples for the Reflector, assisting in evaluating whether the current task goal can be achieved under the present situation. Due to the complexity and diversity of the environment, it’s challenging to determine the success of the current task based solely on success cases. The inclusion of failure cases allows the agent to assess the current state through a diverse comparison. **Table 3 in the manuscript** reveals that incorporating both success and failure cases into in-context learning significantly enhances the performance on long-horizon tasks.
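The two-stage retrieval described in A5 (exact text match on the task goal, then an image-similarity tie-break) can be sketched as follows. This is a hedged illustration under assumed data shapes: `pool` entries, the `goal`/`embedding`/`outcome` keys, and `retrieve_case` are hypothetical names, not the authors' API.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve_case(pool, task_goal, obs_embedding):
    # Stage 1: text match on the task goal.
    matches = [case for case in pool if case["goal"] == task_goal]
    if not matches:
        return None
    # Stage 2: among matching cases, pick the one whose stored image
    # embedding is most similar to the current observation.
    return max(matches, key=lambda c: cosine_similarity(c["embedding"], obs_embedding))
```

Both success and failure cases live in the same pool, so a retrieved failure case can serve directly as an in-context example for the Reflector.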
> Q6: In Table 1, how is GPT-4V evaluated? Does GPT-4V have the same world knowledge as Optimus-1?
A6: (1) **In Table 1 in the manuscript**, GPT-4V is evaluated without integrating hybrid multimodal memory modules. During the planning phase, GPT-4V generates a plan for the action controller based on observation and task. During the reflection phase, it generates reflection results in a zero-shot manner.
(2) Optimus-1 is built upon the GPT-4V foundation with the Hybrid Multimodal Memory. **Table 2 in the manuscript** shows that the removal of structured knowledge from Optimus-1 (based on GPT-4V) leads to a significant decrease in success rates across all task groups. This proves that GPT-4V lacks sufficient knowledge of the Minecraft environment.
(3) We will add a detailed description of the baseline settings in the Appendix.
[1] Wang et al. Jarvis-1: Open-world multi-task agents with memory-augmented multimodal language models. 2023.
---
Rebuttal Comment 1.1:
Title: Thank the authors for the rebuttal
Comment: I have read the rebuttal and other reviews. Most of my concerns have been solved, so I will maintain my original score as 6: Weak Accept. | Summary: The paper proposes "Optimus-1", a multimodal-LLM-based agent. They evaluate their agent extensively on Minecraft and demonstrate superior performance compared to previous work.
The key components of the approach are:
1. A memory module consisting of structured memories (DAG) and multi-modal experience replay (with negative and positive examples)
2. A planning module
3. A reflection module
4. A "low-level" execution module
The novelty of the work consists of:
1. The DAG memory
2. the positive & negative samples in the replay memory
3. Putting everything together in this way
Multiple ablation studies are made.
Strengths:
1. A lot of work went into running these experiments and a lot of results are presented (incl. open-source MLLMs)
2. A lot of work went into designing the evaluation benchmark
3. A novel agent that performs better than previous work (and seems applicable to more tasks).
Weaknesses:
1. "teacher guidance learning" for HDKG -> It seems that teacher demonstrations are needed to make the agent perform the hard tasks.
a. The paper (even incl. appendix) does not explain where these demonstrations come from and what the impact of these is. (Or how costly these demonstrations are to obtain).
b. Looking at the ablation studies one can see that this Knowledge Module is key and performance drops by 20% (however, it is unclear how large the "teacher" contribution is; it seems that without it the model might perform quite poorly compared to baselines). This indicates that expert human demonstrations are needed to actually make the agent work, as opposed to the "LLM" doing the reasoning and work.
c. Similarly, all further evaluations (incl. against other benchmarks in the appendix) are therefore somewhat questionable.
2. The actual learning mechanism of the DAG memory (HDKG) is not described in sufficient detail to replicate the work. It would be good to cite more references or explain this part in more detail.
3. Positive and negative examples in AMEP. While the paper argues this is an important contribution, it seems that it is nowhere evaluated whether using only positive examples would be more successful.
4. Computational costs are only marginally mentioned ($5000 for OpenAI, and only 4xA100, but no amount of hours). It is not clear how much the "exploration" / learning phase costs (time and money); what about the teacher phase?
5. Evaluation outside of Minecraft would be interesting as well.
Technical Quality: 3
Clarity: 3
Questions for Authors:
1. How are teacher demonstrations obtained? How costly is this? How many human annotators and hours are needed for this?
2. How does the agent perform without teacher demonstrations?
3. What is the performance without negative samples in AMEP?
4. How long does the training / exploration phase take? How costly is it? (How many GPU hours on 4x A100 did it take?) How much is an evaluation?
5. How long would your method take to setup in a new environment altogether (and what would be the rough steps)? What is another good environment?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations:
1. The authors speak about the limitations of one of the modules (the action generator), however, the teacher phase limitation mentioned above is not discussed. It seems that the method strongly depends on expert demonstrations and these are expensive to obtain in new environments outside of Minecraft.
2. Evaluations on other envs than Minecraft would have been interesting to discuss.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: How are teacher demonstrations obtained? How costly is this? How many human annotators and hours are needed for this?
A1: (1) We obtain teacher demonstrations from the Minecraft Wiki. For each task group, we randomly select 5 tasks that are not included in the benchmark. We then create corresponding task plans based on content (synthesis relationships between objects) from the Minecraft Wiki. Taking the task “craft a wooden sword” as an example, we obtain the crafting relationships from the Minecraft Wiki: {1 wooden stick, 2 planks, 1 crafting table → 1 wooden sword}, {1 log → 4 planks}, {2 planks → 4 sticks}, {4 planks → 1 crafting table}. These relationships are transformed into the plan: {1. Get two logs 2. Craft eight planks 3. Craft a crafting table 4. Craft two sticks 5. Craft a wooden sword}. The plan serves as a teacher demonstration for the action controller to perform the task.
(2) This method is very low-cost. This process does not require additional human annotators, and it takes about 2 hours to obtain all teacher demonstrations.
(3) We will include these details in the Appendix.
> Q2: How does the agent perform without teacher demonstrations? It is unclear how much the teacher demonstrations contribute to HDKG. Teacher demonstrations are needed to actually make the agent work, as opposed to the "LLM" doing the reasoning and work.
A2: (1) **Table 2 in the manuscript shows that without teacher demonstrations, the performance of Optimus-1 decreases** (e.g., 9.5% -> 1.8% on the Diamond Group).
(2) **Teacher demonstrations are indispensable for constructing the HDKG**. For example, in the free exploration phase, Optimus-1 learns the methods for crafting/mining basic materials (e.g., crafting sticks and mining diamonds). Without demonstrations (plans), it cannot learn the synthesis methods for advanced items (e.g., a diamond sword is crafted from a stick and two diamonds). This limits Optimus-1's ability to complete challenging long-horizon tasks (such as crafting a diamond sword).
(3) **Obtaining teacher demonstrations is efficient and low-cost.** Additionally, compared to parameter-update learning, our non-parametric learning method needs only a few demonstrations (plans) as learning data. This allows for the rapid expansion of the HDKG, enabling better reasoning and planning for MLLMs.
> Q3: The actual learning mechanism of the DAG memory (HDKG) is not described in sufficient detail to replicate the work.
A3: (1) Due to the diversity and complexity of knowledge in Minecraft, we propose the "free exploration-teacher guidance" approach instead of manually constructing a knowledge graph.
(2) During the free exploration phase, Optimus-1 randomly initializes the environment, materials, and tasks. It freely explores basic tasks such as chopping down trees, mining stone with a pickaxe, and crafting stone swords. When the environment feedback indicates that a task is completed, the corresponding relationship (e.g., {1 wooden stick, 2 planks} → {1 wooden sword}) is updated into the HDKG.
(3) During the teacher guidance phase, after Optimus-1 completes a long-horizon task, the advanced synthesis relationship (e.g., {1 wooden stick, 2 diamonds} → {1 diamond sword}) is updated into the HDKG.
(4) We will add a detailed description of the "free exploration-teacher guidance" approach in the Appendix.
> Q4: What is the performance without negative samples in AMEP?
A4: **Table 3 in the manuscript** shows the ablation study for AMEP. It demonstrates that removing negative samples results in a decrease on the success rate (e.g. 94% -> 84% on Stone Group). This reveals that incorporating both success and failure cases into in-context learning significantly enhances the performance of the agent.
> Q5: How long does the training / exploration phase take? How costly is it? (How many GPU hours on 4x A100 did it take?) How much is an evaluation?
A5: (1) In the free exploration and teacher guidance phases, there is no need to access OpenAI's API, which keeps costs low. We instantiate multiple Optimus-1 instances in parallel, sharing the same memory, and the learning process takes approximately 16 hours on 4x A100 80G GPUs.
(2) Evaluating Optimus-1 on the benchmark costs approximately $900. We parallelize the evaluation, which takes about 20 hours on 4x A100 80G GPUs. Figure 5 in the manuscript demonstrates that, with the Hybrid Multimodal Memory, the performance of open-source MLLMs approaches that of GPT-4V. This reveals that we can achieve excellent performance with open-source MLLMs at very low cost.
> Q6: How long would your method take to setup in a new environment altogether? What is another good environment?
A6: **Our method is highly adaptable to various environments**. Taking the app agent as an example, the key step involves transforming the knowledge structure from Minecraft's object synthesis relationships into logical relationships between buttons or operations. Once these logical relationships are established, the HDKG can easily be adapted to the app environment. As for the AMEP, it can be simplified to store the task prompt, image, and action for each atomic operation. Adapting our method to other domains remains future work.
> Q7: The teacher phase limitation mentioned above is not discussed. It seems that the method strongly depends on expert demonstrations and these are expensive to obtain in new environments outside of Minecraft.
A7: As stated in A1 and A2, teacher demonstrations (plans) are easy to collect, efficient, and low-cost. In a new environment, such as an app agent, it is only necessary to collect plans of atomic operations.
> Q8: Evaluations on other envs than Minecraft would have been interesting to discuss.
A8: In the future, we will extend the Hybrid Multimodal Memory to other domains. However, we believe that the current environment and experiments sufficiently demonstrate the contribution and effectiveness of our work. Please refer to the response to reviewer 3HCc, A3.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time and effort to address all questions. Things are clearer at this stage. There are a few follow-up questions.
A1:
> (1) We obtain teacher demonstrations from Minecraft Wiki. For each task group, we randomly select 5 tasks that are not included in the benchmark. We then create corresponding task plans based on content (synthesis relationship between objects) from the Minecraft Wiki
a.) Could you please describe one simple example of how such a plan is constructed. Is an LLM used for this? Are these constructed by a human?
A3:
> (2) During the free exploration phase, Optimus-1 will randomly initialize the environment, materials, and tasks.
a.) Does this happen after the initial expert demonstrations update the HDKG?
b.) How does it happen without any initial demonstrations - what is the prompt, or few-shot example used? How is it constructed (human annotated)?
---
Additional questions (AQ):
a.) Do you mean by updating the HDKG, that a fact (i.e. triplet) is added to an actual KG?
b.) Could you compare your method in more detail to works such as Voyager, specifically in terms of manual effort needed for the various phases and comparison of results with human effort vs. no human effort (especially on the harder long-range tasks).
---
Reply to Comment 1.1.1:
Title: Responses to Reviewer mEzT (1)
Comment: Thank you very much for taking the time to discuss with us despite your busy schedule. Regarding your concerns, we respond as follows:
> Q1: Example of how a plan is constructed during the teacher guidance phase. Is an LLM used for this? Are these constructed by a human?
A1: **We obtain the plans required for the teacher guidance phase through an automated process**. For each task group, we randomly select 5 tasks that are not included in the benchmark. Taking the task “craft a wooden sword” as an example, we use a script to automatically obtain the crafting relationships for a wooden sword from the Minecraft Wiki: {1 wooden stick, 2 planks, 1 crafting table → 1 wooden sword}, {1 log → 4 planks}, {2 planks → 4 sticks}, {4 planks → 1 crafting table}. These relationships can be represented as a directed acyclic graph. Then, by performing a topological sort, the graph can be converted into tuples of materials and their quantities: (wooden sword, 1), (crafting table, 1), (wooden stick, 1), (planks, 8), (log, 2). Finally, we prompt GPT-4 to construct a plan in order from basic materials to advanced materials: {1. Get two logs 2. Craft eight planks 3. Craft a crafting table 4. Craft a wooden stick 5. Craft a wooden sword}. The entire process is automated, simple, and easy to implement. We only need to construct 5 (tasks/group) x 7 (groups) = 35 plans in the teacher guidance phase, which takes approximately two hours.
> Q2: Does the free exploration phase happen after the initial expert demonstrations update the HDKG? How does it happen without any initial demonstrations - what is the prompt, or few-shot example used? How is it constructed (human annotated)?
A2: As described in **Section 2.3 in the manuscript**, we initialize the Hybrid Multimodal Memory as **empty and begin with free exploration to acquire basic knowledge**, such as crafting sticks and mining diamonds. We then proceed to **teacher guidance phase to learn advanced knowledge**, e.g., a diamond sword is obtained by a stick and two diamonds. The entire process does not require additional prompts, few-shot examples, or manual annotations.
**In the free exploration phase**, we randomly initialize the environment, materials, and tasks. For the task “craft a wooden pickaxe”, we provide initial materials (three planks, two sticks), and then Optimus-1 (with only the action controller activated) attempts to complete the task. If the environment feedback indicates the task is successful, the knowledge {3 planks, 2 sticks → wooden pickaxe} is added to the HDKG. Note that we randomly initialize materials and their quantities, which means that the task may not always succeed. As a result, each free exploration may not acquire the corresponding knowledge, but it can record the relevant experience (whether successful or failed). In the free exploration phase, **Optimus-1 learns simple atomic operations, such as crafting sticks in the Wooden Group and mining diamonds in the Diamond Group.** This phase is insufficient for Optimus-1 to learn advanced knowledge, such as crafting a diamond sword.
**In the teacher guidance phase**, Optimus-1 executes each sub-goal sequentially according to the given plan. Once the task is completed, the materials and their corresponding relationships (e.g., {1 wooden stick, 2 diamonds} → {1 diamond sword}) are updated in the HDKG, and the multimodal experience of each sub-goal is stored in the AMEP. **Teacher guidance phase allows Optimus-1 to acquire advanced knowledge and learn multimodal experiences through complete long-horizon tasks.**
> Q3: Do you mean by updating the HDKG, that a fact (i.e. triplet) is added to an actual KG?
A3: **By updating the HDKG, triplets are added to an actual KG**. For example, suppose the current KG contains (2 planks, 'craft', 4 sticks) and (wooden pickaxe, 'mine', stone). When new knowledge is acquired: (2 sticks, 'craft', 1 stone pickaxe), (3 stone, 'craft', 1 stone pickaxe), (1 crafting table, 'needed', stone pickaxe). These 3 triplets are added, and the KG becomes: (2 planks, 'craft', 4 sticks), (wooden pickaxe, 'mine', stone), (2 sticks, 'craft', 1 stone pickaxe), (3 stone, 'craft', 1 stone pickaxe), (1 crafting table, 'needed', stone pickaxe).
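Concretely, the update in A3 amounts to adding fact triplets to a collection of facts. A minimal sketch modeling the KG as a set of (subject, relation, object) tuples; the actual HDKG structure is richer, and `add_triplets` is a hypothetical helper name:

```python
def add_triplets(kg, new_triplets):
    # kg: set of (subject, relation, object) facts; duplicates are
    # ignored automatically because the KG is modeled as a set.
    for triplet in new_triplets:
        kg.add(tuple(triplet))
    return kg
```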
**The rest of the responses are in the next comment**.
---
Reply to Comment 1.1.2:
Title: Responses to Reviewer mEzT (2)
Comment: This comment connects to the **Responses to Reviewer mEzT (1)**
> Q4: Could you compare your method in more detail to works such as Voyager, specifically in terms of manual effort needed for the various phases and comparison of results with human effort vs. no human effort (especially on the harder long-range tasks).
A4: (1) **Neither Voyager nor our Optimus-1 requires human annotation**. As stated in A1, we obtain the plans required for the teacher guidance phase through an automated process. This automated process is simple and easy to implement, and it takes approximately two hours.
(2) As mentioned above, we do not require human effort to obtain the plans. So we cannot provide a comparison of results with human effort versus no human effort.
(3) More detailed comparison between Optimus-1 and Voyager: Firstly, Voyager executes sub-goals in the Mineflayer environment by calling APIs (in the form of code), while Optimus-1 uses an action controller to generate low-level actions like a human, which is more challenging. Secondly, Voyager acquires knowledge through environment feedback during both learning and reasoning processes. For instance, when it fails to execute the task 'craft a diamond sword,' it acquires knowledge such as 'two more diamonds are needed' from the environment feedback. In contrast, Optimus-1 learns whether a task is successfully executed through environment feedback only during the learning phase and requires only a small number of plans to complete the learning process.
Thanks for your discussion with us. If you have any further questions, please feel free to contact us.
---
Reply to Comment 1.1.3:
Title: Responses to Reviewer mEzT (3)
Comment: ### Additional experimental results
As you and some reviewers mentioned the feasibility of adapting Optimus-1 to other domains, we added experiments to demonstrate the generalization of our method. We applied Optimus-1 to the app agent scenario. We followed the environment and settings of AppAgent [1] and conducted comparative experiments on its benchmark (9 apps with a total of 45 tasks). The experimental results in the table below show that Optimus-1 outperforms the AppAgent and GPT-4 baselines. This reveals that Optimus-1 can generalize to more realistic settings, such as real-world app navigation agents.
Tab 1: Experiments on the benchmark of AppAgent. We report the success rate of the agent in completing 45 tasks.
| Method | Success Rate |
| --- | --- |
| GPT-4 | 48.9% |
| AppAgent | 73.3% |
| Optimus-1 | **86.7%** |
We hope that the results of these experiments can address your concerns, and we would greatly appreciate it if you could consider giving us a higher rating.
---
Rebuttal 2:
Title: Responses to Reviewer mEzT
Comment: Thank you very much for taking the time to discuss with us despite your busy schedule. Regarding your concerns, we respond as follows:
> Q1: How could you illustrate the automatic plan acquisition step more clearly? What is required beforehand (eg a Wiki, in what format?) what are the exact steps (i.e. input and output of each component).
A1:
**Step 1: We randomly select 5 tasks for each Group (7 groups in total) that are not included in the benchmark.**
**Step 2: For each selected task, we use a script to automatically obtain the crafting relationships from the Minecraft Wiki**. The pseudocode for the script is as follows:
```python
def get_information_from_wiki(item):
    html = get_html_from_wiki(f"https://minecraft.wiki/w/{item}")
    status, recipe = parser_crafting_from_html(html)
    if status == "failed":  # no crafting recipe: the item is atomic
        breaking = parser_breaking_from_html(html)
        can_break_tools = parser_tools_from_breaking(breaking)
        save_breaking(item, can_break_tools)
        return
    save_recipe(item, recipe)
    for sub_item in recipe:
        get_information_from_wiki(sub_item)

# get item's knowledge from the wiki
item = "wooden_sword"
get_information_from_wiki(item)
```
Taking the task “craft a wooden sword” as an example, we use the script to automatically obtain the crafting relationships: {1 wooden stick, 2 planks, 1 crafting table → 1 wooden sword}, {1 log → 4 planks}, {2 planks → 4 sticks}, {4 planks → 1 crafting table}.
**Step 3: These relationships are converted into a directed acyclic graph through the script below**.
```python
def get_knowledge_graph(item):
    status, recipe = read_recipe(item)
    if status == "failed":  # atomic item: record which tools can break it
        tools = read_breaking_tools(item)
        for tool in tools:
            tool_graph[tool][item] = True
        return
    for sub_item in recipe:
        craft_graph[sub_item][item] = recipe.number[sub_item]
        get_knowledge_graph(sub_item)

# get item's knowledge graph
item = "wooden_sword"
craft_graph = {}
tool_graph = {}
get_knowledge_graph(item)

# merge the 2 graphs into a unified knowledge graph
kg = merge(item, craft_graph, tool_graph)
```
By performing a topological sort, the graph can be converted into tuples of materials and their quantities: (wooden sword, 1), (crafting table, 1), (wooden stick, 1), (planks, 8), (log, 2).
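The topological sort mentioned above can be sketched as a depth-first post-order walk over the crafting graph. This is an illustrative sketch, not the authors' script: the `recipes` dictionary shape and the `topo_order` name are assumptions, and ingredient quantities are ignored here.

```python
def topo_order(recipes, target):
    # Return items ordered so that every ingredient precedes the
    # product that needs it (basic materials first).
    order, seen = [], set()

    def visit(item):
        if item in seen:
            return
        seen.add(item)
        for ingredient in recipes.get(item, {}):
            visit(ingredient)
        order.append(item)

    visit(target)
    return order
```

Reading the result from front to back gives the basic-to-advanced ordering that the GPT-4 prompt in Step 4 asks for.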
**Step 4: We prompt GPT-4 to construct a plan in order from basic materials to advanced materials**:
```
System Prompt: You are an expert in Minecraft and you can efficiently make plans for me to complete challenging tasks.
User Prompt: I need to complete the task {Task}. Here are the materials needed and their quantities {Materials}.
Please make a feasible plan for me in the order from basic materials to advanced materials:
```
Finally, we get the plan: {1. Get two logs 2. Craft eight planks 3. Craft a crafting table 4. Craft a wooden stick 5. Craft a wooden sword}
We have built an official repository to provide well-structured open-source codes and a project page (to be released upon acceptance). And we will add these implementation details in the Appendix.
> Q2: Does Optimus-1 exceed the capabilities of the teacher alone? How meaningful are these new capabilities?
A2: Actually, 'teacher' refers to the internal knowledge of the Minecraft environment. In the free exploration phase, Optimus-1 acquires basic knowledge through environmental feedback, while in the teacher guidance phase, Optimus-1 obtains advanced knowledge through the automated process described in A1. This knowledge cannot directly translate into capabilities in Minecraft. We therefore built Optimus-1, which includes the Knowledge-guided Planner, Experience-Driven Reflector, and Action Controller, to transform this knowledge into the ability to execute long-horizon tasks in Minecraft through the reasoning and reflection of a multimodal large language model.
**Figure 5(b) in the manuscript** shows that Optimus-1's performance continually improves over multiple epochs of the 'free exploration-teacher guidance' learning process. This indicates that Optimus-1 can utilize past memories (knowledge and experience) to gradually enhance its performance on unseen tasks. **Table 2 in the manuscript** shows that the performance of Optimus-1 significantly decreases after removing such knowledge (e.g., from 9.5% to 1.8% on the Diamond Group). Therefore, knowledge has a significant impact on Optimus-1's ability to execute long-horizon tasks.
We hope that these explanations can address your concerns, and we would greatly appreciate it if you could consider giving us a higher rating.
---
Rebuttal 3:
Comment: Thank you for this detailed clarification and answer!
The score is raised from 7 to 8.
We would recommend adding all of the above into the paper to make the paper clearer. Specifically:
1. The comparison against previous work (such as Voyager)
2. The detailed explanation of the various steps and how they work and what they involve.
3. Additional discussion above.
---
Rebuttal Comment 3.1:
Title: Responses to Reviewer mEzT
Comment: We deeply appreciate the time and effort you invested in the evaluation of our paper.
We will revise the paper based on your and the Reviewer wzp9's suggestions, including:
1. Add the comparison with existing Minecraft agents (e.g., Voyager) in Appendix E.3.
2. Add a detailed explanation (how to construct and apply) of the HDKG and AMEP in Appendix F.1.
3. Add experimental results in the app agent scenario in the Appendix.
4. Revise the methodology section and other parts.
5. Incorporate clarifications and implementation details discussed with the reviewers into the Appendix.
Additionally, we have built an official repository to provide well-structured open-source codes and a project page (to be released upon acceptance).
Thank you again for your valuable suggestions, which are crucial for improving the quality of our paper. | Summary: The paper tackles the long-horizon tasks in Minecraft by building a pipeline based on multimodal LLM. Specifically, it proposes to store multimodal memory during agent exploration and a knowledge graph that stores the causal relations between objects and tasks. Additionally, a self-reflection mechanism is used to improve the textual actions. Finally, the textual actions are executed by a pre-trained action controller to output the low-level actions.
Strengths: - The core idea is intuitive that multimodal experience can help with long-horizon tasks for embodied agents, where they can be retrieved at test time to help with decision making.
- The paper is overall written clearly and sufficient details are provided in appendix.
Weaknesses: - There are many individual components in the paper, most of which are claimed to be important. This makes it unclear to what extent the claimed contribution generalizes to broader settings beyond Minecraft (or even beyond the evaluated tasks within Minecraft), as it is also highly plausible that the entire pipeline is carefully designed specifically for the evaluated tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: See "weaknesses" section above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are described in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: There seem to be too many individual components in the paper, most of which are claimed to be important components in the paper, which makes it unclear to what extent the claimed contribution generalizes to broader settings beyond Minecraft (or even just beyond the evaluated tasks in Minecraft), as it is also highly plausible that the entire pipeline is carefully designed specifically for the evaluated tasks.
(1) **The core innovative module of Optimus-1 is the Hybrid Multimodal Memory**. It includes 1) a novel memory module, the Hierarchical Directed Knowledge Graph, which is highly structured and easily updated, enabling concise representation and storage of complex knowledge; and 2) a novel method for constructing an Abstracted Multimodal Experience Pool that dynamically summarizes long-sequence multimodal information, encompassing both global overviews and local details of multimodal experiences. Building on this memory, the Knowledge-guided Planner utilizes the HDKG to enhance task planning capabilities, while the Experience-Driven Reflector leverages the AMEP to improve reflection abilities. Both represent improvements over the planners and reflectors of existing agents.
(2) **Optimus-1 can adapt to different settings and generalize to various Minecraft tasks**, consistently outperforming all baselines across multiple benchmarks [1] [2] [3] (**Figure 8, Table 15, Table 16 in the Appendix**).
(3) **Our proposed Hybrid Multimodal Memory is easily adaptable to other domains**. Taking the app agent [4] as an example, the key step involves transforming the knowledge structure from Minecraft's object synthesis relationships into logical relationships between buttons or operations. Once these logical relationships are established, the HDKG can easily be adapted to the app environment. As for the AMEP, it can be simplified to store the task prompt, image, and action for each atomic operation. Adapting our method to other domains remains future work.
[1] Wang et al. Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents. 2023.
[2] Qin et al. MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception. 2024.
[3] Wang et al. Voyager: An open-ended embodied agent with large language models. 2023.
[4] Zhang et al. AppAgent: Multimodal Agents as Smartphone Users. 2023.
---
Rebuttal 2:
Title: Response to Reviewer 397y
Comment: ### Additional experimental results
To demonstrate that our Hybrid Multimodal Memory is easily adaptable to other domains, we applied Optimus-1 to the app agent scenario. We followed the environment and settings of AppAgent [1] and conducted comparative experiments on its benchmark (9 apps with a total of 45 tasks). The experimental results in the table below show that Optimus-1 outperforms AppAgent and GPT-4 baselines. This reveals that Optimus-1 can generalize to more realistic settings, such as real-world app navigation agents.
Tab 1: Experiments on the benchmark of AppAgent. We report the success rate of the agent in completing 45 tasks.
| Method | Success Rate |
| --- | --- |
| GPT-4 | 48.9% |
| AppAgent | 73.3% |
| Optimus-1 | **86.7%** |
We hope that the results of these experiments can address your concerns, and we would greatly appreciate it if you could consider giving us a higher rating.
We welcome the reviewer to continue the discussion with us during the discussion phase. If you have any further questions, please feel free to contact us.
P.S.: We just realized that it was not visible to you due to a setting error. We sincerely apologize for any inconvenience this may have caused and for not responding to your question in a timely manner.
---
Rebuttal 3:
Title: Response to Reviewer 397y
Comment: We would be grateful if you could take time out of your busy schedule to discuss with us. We are very keen to engage in deeper discussions with the reviewers.
**We further conducted experiments on AitW [1]**. We followed the environment settings of AppAgent and conducted comparative experiments on AitW. **As shown in the table below, Optimus-1 outperforms PaLM 2, GPT-4V, and AppAgent.** AitW is a popular, general benchmark that can demonstrate an agent's ability to operate apps in real-world scenarios, so the experimental results we provide below are sufficient to demonstrate Optimus-1's generalisation in real-world scenarios. We will include these results in the Appendix. Additionally, we have built an official repository to provide well-structured open-source code and a project page (to be released upon acceptance).
Tab 1: Experiments on the subset of AitW. We report the partial match scores for AitW Standard split.
| Method | Match Scores |
| --- | --- |
| PaLM 2 | 39.6 |
| GPT-4V | 50.5 |
| AppAgent | 52.4 |
| Optimus-1 | **58.3** |
We hope that the results of these experiments can address your concerns, and we would greatly appreciate it if you could consider giving us a higher rating.
[1] Rawles et al. Android in the Wild: A Large-Scale Dataset for Android Device Control. 2023.
---
Rebuttal Comment 3.1:
Title: Response
Comment: Thank you for the response, and I appreciate the efforts for the new experiments. These have addressed my concerns and I'm raising my recommendation accordingly.
---
Reply to Comment 3.1.1:
Title: Response to Reviewer 397y
Comment: Thanks for your reply! We are pleased to have addressed your concerns. We will add these experiments in the Appendix. Additionally, we have built an official repository to provide well-structured open-source codes and a project page (to be released upon acceptance). | Rebuttal 1:
Rebuttal: **Response to all Reviewers**
We would like to thank all reviewers (#3HCc, #397y, #mEzT, #AfEc, #wzp9) for their time and effort in providing constructive feedback. We are very encouraged that the reviewers found the manuscript well-written and easy to follow (R#397y, R#AfEc), the proposed Hybrid Multimodal Memory novel (R#3HCc, R#AfEc, R#397y, R#mEzT, R#wzp9), and the proposed Optimus-1 superior to prior state-of-the-art (R#3HCc, R#mEzT, R#AfEc, R#wzp9), with comprehensive experiments (R#mEzT, R#AfEc). We have built an official repository for providing well-structured open-source code (released upon acceptance).
We have responded to your questions and comments inside each individual review. We hope these responses will offer a more thorough understanding of our paper. If your concerns have been resolved, we would greatly appreciate it if you could consider giving us a higher rating. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Optimus-1 introduces an innovative Hybrid Multimodal Memory module that combines a Hierarchical Directed Knowledge Graph (HDKG) and an Abstracted Multimodal Experience Pool (AMEP) to address knowledge and experience management in long-horizon tasks.
Strengths: * Proposes a hybrid multimodal memory module, which includes HDKG and AMEP, offering an innovative approach to managing knowledge and experience in long-horizon tasks.
* Experimental results demonstrate that Optimus-1 significantly outperforms all existing agents on long-horizon tasks and achieves near-human-level performance in many tasks.
* The HDKG maps knowledge into a directed graph structure, enabling the agent to efficiently retrieve and utilize knowledge without needing to update parameters.
* The AMEP summarizes not only successful cases but also failure cases, significantly enhancing the agent's learning effectiveness.
Weaknesses: * Improving long-horizon task performance through memory modules is common in LLMs; applying this directly to VLMs is not very novel.
* Reaction time and decision efficiency might be issues; the current experimental results still show a large number of steps.
* Although it performs well in the Minecraft environment, its performance in real-world applications, such as software manipulation and web navigation, has not been verified.
* As new knowledge and tasks emerge, effectively updating and maintaining the knowledge (HDKG) remains a challenge.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: Improving long-horizon task performance through memory modules is common in LLMs; applying this directly to VLMs is not very novel.
A1: **Incorporating multimodal memory into VLMs presents significant challenges compared to applying unimodal memory to LLMs.** Long-horizon tasks require the model to save and utilize past information, which is particularly crucial in a multimodal environment. The diverse structures and characteristics of information from different modalities make it difficult to effectively preserve multimodal historical information over extended time periods.
**To address this, we propose a novel Hybrid Multimodal Memory module that incorporates structured knowledge and multimodal experiences, implementing the dynamic summarization of multimodal information to reduce storage and retrieval costs.** It includes 1) a novel memory module called Hierarchical Directed Knowledge Graph, which is highly structured and easily updated, enabling concise representation and storage of complex knowledge; 2) a novel method for constructing Abstracted Multimodal Experience Pool that dynamically summarizes long-sequence multimodal information, encompassing both global overviews and local details of multimodal experiences.
**Table 7 in the manuscript** shows that existing agents do not incorporate memory modules with both knowledge and experience, resulting in inferior performance on long-horizon tasks compared to Optimus-1 (**Table 1, Table 15, Table 16 in the manuscript**).
> Q2: Reaction time and decision efficiency might be issues; the current experimental results still show a large number of steps.
A2: (1) For reaction time, **Table 1 in the manuscript** shows that Optimus-1's average task completion time (AT) is significantly lower than that of other baselines and approaches human-level performance. For example, on the Wood Group, Optimus-1 takes an average of **47** seconds and the human baseline takes **31** seconds, whereas DEPS [1] takes **85** seconds.
(2) For decision efficiency, unlike existing agents [1] [2] [3] that require multiple interactions with (M) LLMs for task planning, Optimus-1 completes task planning with a single interaction, thus achieving much higher decision efficiency than existing agents. For example, to complete the planning of task “craft iron pickaxe”, MP5 [3] requires **11 interactions** with MLLM, whereas Optimus-1 completes the planning in just **one interaction**.
(3) In the MineRL environment, 'steps' refers to the number of interactions between the agent and the environment, occurring at a frequency of 20 per second. For example, if an agent takes 2 seconds to complete the task “chop a tree”, it interacts with the environment 40 times, resulting in a recorded step count of 40. **Table 1 in the manuscript** shows that Optimus-1's average task completion steps (AS) are significantly lower than those of other baselines.
> Q3: Although it performs well in the Minecraft environment, its performance in real-world applications, such as software manipulation and web navigation, has not been verified.
A3: (1) **Minecraft is a valuable and representative environment for evaluating long-horizon tasks, offering greater diversity and complexity compared to other environments**. Unlike web/app navigation [4] and embodied manipulation [5], Minecraft is an open world with a complex and dynamic environment (79 biomes, including ocean, plains, forest, desert, etc.). To complete long-horizon tasks, agents must achieve multiple sub-goals (e.g., 15 sub-goals to craft a diamond sword), making the construction of a Minecraft agent quite challenging. Many studies [2] [3] [6] have chosen Minecraft as the environment for validating performance on long-horizon tasks. Extensive experimental results (**Table 1, Table 15, Table 16, Figure 8 in the manuscript**) show that Optimus-1 outperforms all baselines. Therefore, conducting experiments in the Minecraft environment is sufficient to demonstrate the contributions of this paper.
(2) **Our proposed Hybrid Multimodal Memory is easily adaptable to other domains**. Take the app agent [4] as an example: the key step involves transforming the knowledge structure from Minecraft's object synthesis relationships into logical relationships between buttons or operations. Once these logical relationships are established, HDKG can easily be adapted to the app environment. As for AMEP, it can be simplified to store the task prompt, image, and action for each atomic operation. Adapting our method to other domains remains future work.
> Q4: As new knowledge and tasks emerge, effectively updating and maintaining the knowledge (HDKG) remains a challenge.
A4: **Our HDKG can be efficiently updated and expanded**. When adding new nodes, the HDKG can be updated by simply merging the nodes and relationships into the graph. This involves local linear modifications to the graph rather than altering the entire graph, making the process efficient and time-saving. For example, when M new nodes and N new edges are added, the HDKG can be updated in M+N operations. Moreover, an HDKG containing 851 objects (nodes) requires less than 1 MB of memory. Thus, the HDKG can be efficiently updated and maintained.
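The merge-style update described in A4 can be sketched as below. This is a hypothetical illustration, not the authors' implementation: the node names (`log`, `planks`, etc.) are invented, and the graph is represented as a plain adjacency map from node to its set of successors, so adding M nodes and N edges touches only M+N entries.

```python
# Hypothetical sketch of an incremental directed-knowledge-graph update.
# graph: adjacency map, node -> set of successor nodes.

def update_graph(graph, new_nodes, new_edges):
    """Merge new nodes and directed edges into the graph in place.

    Only the affected entries are touched, so adding M nodes and
    N edges costs M + N operations rather than a full rebuild.
    """
    for node in new_nodes:                     # M operations
        graph.setdefault(node, set())
    for src, dst in new_edges:                 # N operations
        graph.setdefault(src, set()).add(dst)
        graph.setdefault(dst, set())
    return graph

# Toy example: merge one new craftable object and one synthesis relationship.
hdkg = {"log": {"planks"}, "planks": {"stick"}, "stick": set()}
update_graph(hdkg, ["crafting_table"], [("planks", "crafting_table")])
```

Because the update is a local merge, the rest of the graph (here, the `log -> planks` relationship) is left untouched.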
[1] Wang et al. Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents. 2023.
[2] Wang et al. Voyager: An open-ended embodied agent with large language models. 2023.
[3] Qin et al. MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception. 2024.
[4] Zhang et al. AppAgent: Multimodal Agents as Smartphone Users. 2023.
[5] Jiang et al. VIMA: General Robot Manipulation with Multimodal Prompts. 2023.
[6] Baker et al. Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos. 2022.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for the authors' rebuttal. I will change my score to 4. I look forward to seeing some experiments in software manipulation or web navigation based on the Optimus-1 framework and am also open to further modifying the score.
---
Reply to Comment 1.1.1:
Title: Responses to Reviewer 3HCc
Comment: Thank you very much for taking the time to discuss with us despite your busy schedule. Regarding your concerns, we respond as follows:
Q1: I look forward to seeing some experiments in software manipulation or web navigation based on Optimus-1 framework and also open to further modify the score.
A1: To address your concerns about the generalisation of our method in real-world scenarios, we applied Optimus-1 to the app agent scenario. We followed the environment and settings of AppAgent [1] and conducted comparative experiments on its benchmark (9 apps with a total of 45 tasks). The experimental results in the table below show that Optimus-1 outperforms AppAgent and GPT-4 baselines. This reveals that Optimus-1 can generalize to more realistic settings, such as real-world app navigation agents.
Tab 1: Experiments on the benchmark of AppAgent. We report the success rate of the agent in completing 45 tasks.
| Method | Success Rate |
| --- | --- |
| GPT-4 | 48.9% |
| AppAgent | 73.3% |
| Optimus-1 | **86.7%** |
We hope that the results of these experiments can address your concerns, and we would greatly appreciate it if you could consider giving us a higher rating. If you have any further questions, please feel free to contact us.
[1] Zhang et al. AppAgent: Multimodal Agents as Smartphone Users. 2023.
---
Rebuttal 2:
Title: Responses to Reviewer 3HCc
Comment: Thank you very much for taking the time to discuss with us despite your busy schedule. Regarding your concerns, we respond as follows:
> Q: AppAgent can also achieve the similar results with "Watching Demos". Would you like to explore the performance of Optimus-1 in AitW[1] and Osworld[2], it would be more challenging and persuasive.
A: (1) **The experimental results in the Table below show that Optimus-1 outperforms AppAgent, even when compared to AppAgent with Watching Demos.**
Tab 1: Experiments on the benchmark of AppAgent. We report the success rate of the agent in completing 45 tasks.
| Method | Success Rate |
| --- | --- |
| GPT-4 | 48.9% |
| AppAgent-Auto. Exploration | 73.3% |
| AppAgent-Watching Demos | 84.4% |
| Optimus-1 | **86.7%** |
(2) **We further conducted experiments on AitW [1]**. We followed the environment settings of AppAgent and conducted comparative experiments on AitW. **As shown in the table below, Optimus-1 outperforms PaLM 2, GPT-4V, and AppAgent.** It is important to note that our method is training-free, and due to time constraints, the experiments were conducted under insufficient “free exploration-teacher guidance” learning conditions, so it is unfair to compare it to baselines fine-tuned on the AitW dataset. We will include these results in the Appendix. Additionally, we have built an official repository to provide well-structured open-source code and a project page (to be released upon acceptance).
Tab 2: Experiments on the subset of AitW. We report the partial match scores for AitW Standard split.
| Method | Match Scores |
| --- | --- |
| PaLM 2 | 39.6 |
| GPT-4V | 50.5 |
| AppAgent | 52.4 |
| Optimus-1 | **58.3** |
(3) Since conducting experiments on OSWorld [2] requires a virtual machine and the environment configuration is quite complex, we are unable to provide experimental results for Optimus-1 on OSWorld in such a short time. However, AitW is a popular, general benchmark that can demonstrate an agent's ability to operate apps in real-world scenarios, so the experimental results we provided above are sufficient to demonstrate Optimus-1's generalisation in real-world scenarios. We would like to provide experiments in more scenarios in future versions to demonstrate the generalisation of the proposed method. Additionally, we have built an official repository to provide well-structured open-source code and a project page (to be released upon acceptance).
We hope that the results of these experiments can address your concerns, and we would greatly appreciate it if you could consider giving us a higher rating.
[1] Rawles et al. Android in the Wild: A Large-Scale Dataset for Android Device Control. 2023.
[2] Xie et al. OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments. 2024.
---
Rebuttal Comment 2.1:
Title: Response to Authors
Comment: Thanks for the experiments results. Would you like to share the implementation code and comprehensive trajectory records in an anonymous link, I will check with it and increase my score if the implementation is correct.
---
Reply to Comment 2.1.1:
Title: Responses to Reviewer 3HCc
Comment: Thank you very much for taking the time to discuss with us despite your busy schedule. We would like to share the code with you, but we have noticed that NeurIPS's rebuttal guideline clearly states, 'All the texts you post (rebuttal, discussion, and PDF) should not contain any links to external pages.' Therefore, we will discuss the feasibility of sharing the code link with the area chair, and we hope you can understand our concerns.
Moreover, we would be grateful if you could focus more on our contribution of proposing the hybrid multimodal memory module and constructing the agent Optimus-1, which outperformed all powerful baselines in executing long-horizon tasks in Minecraft. This has been acknowledged by all reviewers (#3HCc, #397y, #mEzT, #AfEc, #wzp9).
To address your concerns, we have done our best to verify the effectiveness of the proposed method on the AppAgent benchmark and supplemented experiments on AitW at your additional request. These experimental results demonstrate the generalization of Optimus-1 in general scenarios. We sincerely hope you can see the efforts and sincerity we put into addressing your concerns. We have built an official repository to provide well-structured open-source codes and a project page (to be released upon acceptance).
Additionally, we are pleased to have addressed the concerns of other reviewers (#397y, #mEzT, #wzp9), and they have raised their scores. We would greatly appreciate it if you could take these aspects into consideration and give us a higher score.
Thanks again for the time and effort you invested in the evaluation of our paper. | null | null | null | null | null | null |
MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts | Accept (poster) | Summary: This paper aims to address the quality issues in human-centric text-to-image generation by constructing a large-scale dataset of millions of portraits. Additionally, the authors propose training experts for generating facial and hand details and efficiently integrate these experts using the MoE architecture into diffusion models. This integration enhances the realism of the final outputs in terms of facial and hand details.Extensive visualizations and analyses demonstrate the effectiveness of the proposed approach in this paper.
Strengths: +This paper is well-written, clearly expressing the motivation and key contributions.
+The proposed Mixture of Low-rank Experts on text-to-image diffusion model framework is novel. To the best of my knowledge, this is the first work to try MoE learning strategy to enhance the human generation quality in diffusion models.
+The contributions are relatively substantial, especially with the author's collection of a publicly available high-quality human dataset. This dataset's availability will advance the community in deeper research into human generation.
+The visualized comparisons are comprehensive and demonstrate the advantages of the MoE architecture in diffusion models.
Weaknesses: Overall, I am quite satisfied with this paper, but I still have some questions or concerns, as detailed below:
-- The text description process is inadequate, particularly regarding the manual filtering of data generated by the LLaVA model, which lacks any detailed description. As far as I know, LLaVA is trained on general image-text pairs, so there should be many hallucinated descriptions in the process of generating human image descriptions. How did the authors address this issue? Additionally, why did the authors not consider using some human/clothing attribute prediction methods and then use the generated attributes as prompts for the MLLM to make the results more reliable?
--The visualized comparison results presented by the authors are mostly half-body portraits. Could the authors show some full-body comparison results? In these images, the face and hand areas are smaller, increasing the probability of poor quality and making them more challenging to fix.
Technical Quality: 3
Clarity: 4
Questions for Authors: Most of my questions have been listed in the "weaknesses" section. Please respond to the above questions.
Additionally, to better evaluate its generalization capability, have the authors considered applying the MoE architecture to transformer-based diffusion models, such as PixArt[1]?
[1] Junsong Chen, Jincheng Yu and etc. PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis. ICLR, 2024.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: See the "weakness" and "questions" section.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comment, which is truly invigorating and encouraging! Below is our pointwise response, hoping to address your concerns.
**Q1:** The text description process is inadequate, particularly regarding the manual filtering of data generated by the LLaVA model.
**A1:** Sorry for the confusion. We take an image and the prompt "Describe this image in one sentence with details" as input for LLaVA to generate the caption of the image. Afterward, we manually streamline long LLaVA captions. Specifically, if a caption is very long, we rewrite it ourselves as a shorter caption while keeping it aligned with the content of the image. We also remove unrelated and uninformative text patterns such as "creating ...", "demonstrating ...", etc. For example, if the caption contains, e.g., "creating a peaceful atmosphere", we remove it to make the model focus more on informative words. We will add a detailed description of this process to our paper.
**Q2:** How did the authors address the issue of hallucinated descriptions. Why did the authors not consider using some human/clothing attribute prediction methods and then use the generated attributes as prompts for the MLLM to make the results more reliable?
**A2:** To alleviate this issue, we use CLIP to filter image-text pairs with lower scores. By doing so, we can effectively filter the hallucinated descriptions whose content does not appear in the image.
For the second question, we also appreciate the reviewer's suggestion of using human/clothing attribute models. In our preliminary experiments, we qualitatively compared several CLIP-filtered LLaVA-generated captions with those generated by attribute models. Both approaches produced captions rich in detail and similar in their expression of global semantics. However, while the attribute-model captions were more detailed due to the nature of these models, they occasionally appeared somewhat awkward and less natural than ordinary human descriptions. (For example, "A full-body shot, an Asian adult female, outdoor, black straight above chest hair, a black silk shirt" **vs.** "an Asian adult female with black straight hair falling just above her chest, wearing a black silk shirt.") Another potential issue is that the attribute model may ignore the behavior of a person in an image, e.g., running or reading. Therefore, we opted not to use them in our paper, also considering that the former method was simpler to implement. Regardless, we appreciate the reviewer's valuable suggestion and will consider combining it with our current method in future work.
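The score-based filtering mentioned in A2 can be illustrated with a minimal sketch. This is a hedged example, not the paper's pipeline: it assumes the image and text embeddings have already been produced by a CLIP-style encoder, and the toy 2-D embeddings, captions, and threshold value are all invented.

```python
import numpy as np

def filter_pairs(image_embs, text_embs, captions, threshold=0.25):
    """Keep (image, caption) pairs whose cosine similarity passes a threshold.

    image_embs, text_embs: arrays of shape (n, d), one row per pair.
    The threshold here is illustrative, not the paper's actual setting.
    """
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = np.sum(img * txt, axis=1)   # cosine similarity per pair
    keep = scores >= threshold
    return [c for c, k in zip(captions, keep) if k], scores

# Toy example: the second caption does not match its image embedding
# (orthogonal vectors), so it is filtered out as a likely hallucination.
imgs = np.array([[1.0, 0.0], [0.0, 1.0]])
txts = np.array([[0.9, 0.1], [1.0, 0.0]])
kept, scores = filter_pairs(imgs, txts, ["a woman reading", "a dog running"])
```

The intuition matches the rebuttal: a hallucinated caption describes content absent from the image, which yields a low image-text similarity and fails the threshold.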
**Q3:** Could the authors show some full-body comparison results?
**A3:** Yes, we show several full-body comparison results based on MoLE (SD v1.5) and MoLE (SDXL) in Fig. 4 of the attached PDF file. One can see that even in the case of full-body images, where the face and hand areas are smaller, MoLE can still refine these parts. We thank the reviewer for this suggestion and will add this result to our paper.
**Q4:** To better evaluate its generalization capability, have the authors considered applying the MoE architecture to transformer-based diffusion models, such as PixArt?
**A4:** We thank the reviewer's suggestion. To verify the generalization of our method, we attempt to build our MoLE based on PixArt-XL-2-512x512. To compare the performance, we randomly sample 3k prompts from COCO Human Prompts and calculate HPS for MoLE (PixArt) and PixArt.
The evaluation process is repeated three times. Our method achieves $21.79 \pm 0.03$ HPS(%) and outperforms PixArt ($21.33 \pm 0.08$ HPS). These results demonstrate the generalization of our method. We are willing to cite and add the discussion to our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The response has addressed most of my concerns. Thus, I keep the "Accept" rating. | Summary: This paper enhances these Text-to-image diffusion models by introducing a curated dataset of over a million human-centric images and a novel method, MoLE, which utilizes specialized low-rank modules to improve facial and hand image quality in diffusion processes.
If the rebuilt dataset can be made race-balanced and the approach can promote racial balance, I will be glad to raise my score.
Strengths: 1. The paper proposes a new human-centric dataset to enhance the human-centric generations, which is interesting.
2. Extensive experiments demonstrate the effectiveness of this approach.
3. The presentation is clear and easy to follow.
Weaknesses: 1. Stable Diffusion has introduced race biases when generating images. MoLE focuses on the generation qualities of faces and hands. If MoLE is beneficial to alleviate race biases with the reconstruction of the human-centric dataset, it will be more impactful like [1].
2. Limb deformation is also a big question in Stable Diffusion[2][3]. With the introduction of many human images featuring limbs in this dataset, it is crucial to assess whether this approach can effectively mitigate these deformations.
3. The generation of the two global scalars may be tricky (Lines 180-182); the current experiments provide little evidence of the effects of the global scalars.
[1] ITI-GEN: Inclusive Text-to-Image Generation
[2] HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation.
[3] Towards Effective Usage of Human-Centric Priors in Diffusion Models for Text-based Human Image Generation.
Technical Quality: 3
Clarity: 3
Questions for Authors: 4. From lines 88-89, the main race is white. Could the authors provide an analysis of the ratios of different races for one prompt (A Beautiful Woman)? Also, could the authors discuss whether, if the ratios of different races were the same, the generations of different races would be improved?
5. The values of S in Figure 1 seem arbitrary for deciding the quality of a generation, so how are these values decided in MoLE?
6. Since the dataset is considered to be part of the innovation, will the dataset be released in the future?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The negative board impacts are insufficient in the Conclusion. The authors should analyze whether their approach will introduce more biases on race and other impacts, such as fake faces.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the encouraging comment! Below is our pointwise response, hoping to address your concerns.
**Q1:** An analysis of the ratios of different races in one prompt (A Beautiful Woman)? Provide a discussion that if the ratios of different races are the same, will the generations of different races be improved?
**A1:** We follow the reviewer's suggestion and use the prompt "A Beautiful Woman" to show the ratios of different races in MoLE. Specifically, we generate 10K images with this prompt. With the help of DeepFace (https://github.com/serengil/deepface), we find that approximately 51.08% of individuals are identified as white, 5.29% as Asian, 10.18% as Black, 4.31% as Indian, 24.66% as Latino Hispanic, and 4.48% as Middle Eastern.
To verify whether the generation of different races can be improved by using a race-balanced dataset, we use DeepFace to reconstruct, from our dataset, a new dataset with equal ratios of different races. The newly created dataset comprises 30K images. We use it to train a MoLE model and generate 10K images using the same prompt "A Beautiful Woman". We find that approximately 45.56% of individuals are identified as white, 7.17% as Asian, 8.32% as Black, 13.41% as Indian, 13.44% as Latino Hispanic, and 12.10% as Middle Eastern. This result shows a relatively higher balance of races compared to the previous result, demonstrating that MoLE is beneficial for alleviating race biases with the reconstructed dataset. We are willing to cite and add this discussion to our paper.
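The aggregation step of this analysis can be sketched as follows. This is a hypothetical illustration: the labels and counts are invented, and in the actual experiment each per-image label would come from a face-analysis tool such as DeepFace rather than a hard-coded list.

```python
from collections import Counter

def race_ratios(predictions):
    """Aggregate per-image dominant-race labels into percentage ratios.

    predictions: one race label per generated image, e.g. the dominant
    race reported by a face-analysis tool for that image.
    """
    counts = Counter(predictions)
    total = len(predictions)
    return {race: 100.0 * c / total for race, c in counts.items()}

# Toy example with 10 invented predictions.
ratios = race_ratios(["white"] * 5 + ["asian"] * 2 + ["black"] * 3)
```

Comparing such ratio dictionaries before and after retraining on the rebalanced data is what supports the comparison reported above.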
**Q2:** With the introduction of many human images featuring limbs in this dataset, assess whether this approach can effectively mitigate limb deformations.
**A2:** To assess whether MoLE can effectively mitigate limb deformations, we perform a user study by sampling 20 image pairs from SD v1.5 and MoLE, and inviting 50 participants to evaluate which model produces better human limbs with fewer deformations. We find that 62% of participants select MoLE, which indicates that MoLE can also effectively mitigate limb deformations.
**Q3:** The generation of two global scalars may be tricky (Lines 180-182), current experiments show less evidence about the effects of the global scalars.
**A3:** We apologize for the confusion. Actually, we have shown the efficacy of the global scalars in our ablation study of **Mixture Assignment** in Tab 3, where employing only global assignment also improves the performance compared to SD v1.5 (as well as the model in Stage 1). Moreover, in (a) and (b) of Fig 8, we can see that global assignment is content-aware. For example, when generating a close-up image, e.g., a face image, the global assignment consciously produces large global scalars for the face expert and small global scalars for the hand expert. From this perspective, the global assignment is meaningful.
**Q4:** The values of S in Figure 1 are more arbitrary to decide the qualities of a generation, so how to decide these values in MoLE?
**A4:** In MoLE, we use a mechanism called **Soft Mixture Assignment** to determine these values. This mixture assignment contains two parts: global assignment and local assignment. Specifically, the global assignment takes as input the entire feature map to produce adaptive global scalars, which are allocated to each expert. Additionally, we introduce the local assignment, which takes as input each token to produce local scalars that similarly determine how much weight each token sends to each expert.
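A minimal numerical sketch of this two-level gating is given below. Everything here is invented for illustration: the shapes, the random gate weights, and the dense matrices standing in for the low-rank experts are not the paper's architecture; the sketch only shows how a pooled global sigmoid scalar and per-token local sigmoid scalars jointly weight each expert's output.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_mixture(tokens, experts, w_global, w_local):
    """Combine expert outputs with global and local sigmoid gates.

    tokens:   (n, d) feature map flattened into n tokens.
    experts:  list of (d, d) matrices standing in for low-rank experts.
    w_global: (d, e) gate producing one scalar per expert from the pooled map.
    w_local:  (d, e) gate producing per-token scalars for each expert.
    """
    pooled = tokens.mean(axis=0)            # pool the whole feature map: (d,)
    g = sigmoid(pooled @ w_global)          # (e,) global scalars, one per expert
    l = sigmoid(tokens @ w_local)           # (n, e) local scalars, per token
    out = np.zeros_like(tokens)
    for i, expert in enumerate(experts):
        out += g[i] * l[:, i:i + 1] * (tokens @ expert)
    return out, g, l

rng = np.random.default_rng(0)
n, d, e = 3, 4, 2                           # 3 tokens, dim 4, 2 experts
tokens = rng.normal(size=(n, d))
experts = [rng.normal(size=(d, d)) for _ in range(e)]
out, g, l = soft_mixture(tokens, experts,
                         rng.normal(size=(d, e)), rng.normal(size=(d, e)))
```

Because both gates pass through a sigmoid, every scalar lies in (0, 1), so each expert contributes a soft, content-dependent fraction of its output rather than being hard-selected.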
**Q5:** Since the dataset is considered to be part of the innovation, will the dataset be released in the future?
**A5:** Yes, we are very willing to release this dataset to advance the development of our community. Moreover, we promise to select a suitable license to ensure compliance with legal regulations and emphasize its exclusive use for academic purposes only.
**Q6:** The negative board impacts are insufficient in the Conclusion. The authors should analyze whether their approach will introduce more biases on race and other impacts, such as fake faces.
**A6:** We thank the reviewer for this advice. Through our analysis in **A1** above, although MoLE may not introduce additional race biases, it inherits the biases in the training data, like previous methods. As for other impacts such as fake faces, since our method primarily focuses on human-centric image generation, it inevitably generates fake faces like other SD models, which requires users to use these generated images carefully and legally. These issues also warrant further research and consideration. We maintain transparency in our methods with open-source code and dataset composition, allowing for continuous improvement based on community feedback. We will highlight these discussions in the Broader Impact part.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Most of my questions are well-discussed, but the answer to A1 shows that MoLE can only alleviate the bias a little even with a race-balanced dataset. So, I'll keep my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 5LWQ
Comment: We appreciate Reviewer 5LWQ's response. We think the limited effectiveness of MoLE in alleviating bias may be due to the vast size of the imbalanced training data (2 billion from LAION and 1 million from our own dataset), compared to the 30K samples in our race-balanced data, which constrains its impact. We believe that if we increase the amount of race-balanced data, our approach could further mitigate the race issue. We are actively working on collecting more race-balanced data and are working hard to alleviate this bias issue further. | Summary: This paper explores human-centric text-to-image generation, particularly for faces and hands, where results often fall short of naturalness due to insufficient training priors. The authors alleviate the issue from two perspectives. 1) The authors collect a human-centric dataset with two specific sets of close-up images of faces and hands. 2) The authors propose the Mixture of Low-rank Experts (MoLE) method by considering low-rank modules trained on close-up hand and face images, respectively, as experts.
Strengths: 1) This paper constructs a human-centric dataset comprising over one million high-quality human-in-the-scene images and two specific sets of close-up images of faces and hands. These datasets collectively provide a rich prior knowledge base to enhance the human-centric image generation capabilities of the diffusion model.
2) This paper proposes a simple yet effective method called Mixture of Low-rank Experts (MoLE) by considering low-rank modules trained on close-up hand and face images respectively as experts.
3)The paper is well-written and easy to follow. The dataset and benchmark of this paper are open source.
Weaknesses: 1) As the authors mention, the soft mechanism is built on the fact that each token can adaptively determine how much weight should be sent to each expert via the sigmoid function. It would therefore be helpful if the authors provided and analyzed the distribution of the weights sent to each expert by the sigmoid function.
2) There are some minor writing errors in the paper, such as "is is" on line 208. The author needs to carefully check the manuscript.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please check the weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of the proposed dataset and method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comment! We hope our response presented below can address your concerns.
**Q1:** Provide and analyze the distribution of the local weight sent to each expert by the sigmoid function.
**A1:** We thank the reviewer's advice. We provide the distribution of the local weight sent to each expert in Fig.3 of the attached PDF file. To obtain this, we generate 10 samples for close-up images and normal human images, respectively, and collect local weights for each expert. In Fig.3, one can see that for close-up images, e.g., face, the corresponding expert receives more weights with higher values. We think this effectively demonstrates the efficacy of the soft assignment mechanism in MoLE, which adaptively activates the relevant expert to contribute more to the generation of close-up images. When generating normal human images involving face and hand, the two experts contribute equally, and generally, the face expert receives relatively more weights with higher values as the area of the face is typically larger than that of the hand. We will add this result and discussion in our paper.
**Q2:** Some minor writing errors in the paper
**A2:** Thanks for pointing out the typos. We will check our manuscript carefully and fix them.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
My concerns are well-discussed, but considering the scores of other reviewers, I decide to keep my score. | Summary: The authors propose a large-scale dataset for human image generation, comprising over one million images, including close-up face and hand subsets. They also introduce MoLE (Mixture of Low-rank Experts), a novel framework that utilizes two low-rank experts to learn face and hand representations. This approach shows promise for generating high-quality human images with precise control over face and hand features. Overall, the proposed dataset and method are a valuable contribution to the field of human image generation.
Strengths: 1. A large-scale human-centric dataset is proposed along with two close-up face and hand subsets.
2. A MoLE framework with two experts is proposed which is novel and interesting.
Weaknesses: 1. Comparison with existing datasets: While the proposed dataset is a significant contribution, a thorough comparison with established datasets like CosmicMan is essential to contextualize its value.
2. Comparison with state-of-the-art methods: The paper primarily focuses on generating realistic faces and hands, but it lacks a comprehensive comparison with relevant methods like HanDiffuser and HyperHuman. Comparing only with Stable Diffusion (SD) may not provide a complete picture. It would be beneficial to train existing methods on the proposed dataset and compare the results with the proposed method.
3. Ablation study on experts: It would be interesting to see the results of training only one expert compared to training two experts, to understand the impact of using multiple experts on the model's performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: My main concern is the comparison with existing datasets and methods. Could you provide more details about it?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Data quality and representativeness']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comment! Below is our pointwise response, hoping to address your concerns.
**Q1:** Comparison with existing datasets like CosmicMan is essential to contextualize its value.
**A1:** We thank the reviewer's advice. Below, we give a comparison of the differences between CosmicMan and our newly collected dataset, which primarily lie in four aspects:
- From the aspect of image diversity, due to different motivations, CosmicMan only contains human-in-the-scene images, while our dataset also involves two close-up datasets, for face and hand, respectively. Moreover, to the best of our knowledge, a high-quality close-up hand dataset is absent from prior related studies.
- From the aspect of image content distribution, there is a relatively severe gender imbalance in CosmicMan, where females make up a large proportion (around 75%; see Fig. 3 of the Appendix in its paper), while our dataset is relatively balanced (58% vs. 42%).
- From the aspect of image size, though CosmicMan and our dataset are both of high quality, our collected images (basically over 1024 × 2048) are generally larger than those of CosmicMan, whose average size is 1488 × 1255.
- From the aspect of data sources, our dataset is legally collected from various websites, including unsplash.com, gratisography.com, morguefile.com, and pexels.com, while CosmicMan is sourced from LAION-5B (see https://huggingface.co/datasets/cosmicman/CosmicManHQ-1.0 ). What sets our dataset apart is not just its wide range of sources, but also the freshness of the data. As a trade-off, the quantity of our dataset (1M) is relatively smaller than that of CosmicMan (5M).
We will add this comparison to our paper to highlight the value of our dataset.
**Q2:** Comparison with state-of-the-art methods like HanDiffuser and HyperHuman.
**A2:** We thank the reviewer's advice. Regrettably, we find that the code for HanDiffuser (https://supreethn.github.io/research/handiffuser/index.html) and HyperHuman (https://github.com/snap-research/HyperHuman) has not been made available. As a result, we are unable to directly compare these methods with our work. We attempt to reimplement HyperHuman (it is relatively simpler) based on our understanding of the paper. However, due to our constraints in time and computational resources, we were unable to complete the reimplementation. Hence, we resort to a user study. Specifically, we invite 50 participants to compare the visualization presented in the two methods' papers with our generated images, respectively. In the user study, we prepare 10 MoLE-HyperHuman pairs and ask participants to select the best one from each pair according to their preference in terms of hand quality. Some compared images are presented in Fig.1 and Fig.2 of the attached PDF file to show the differences between our generated images and theirs. The results show that 58% of participants think our generated images are better than that of HyperHuman. Similarly, for HanDiffuser, we also prepare 10 MoLE-HanDiffuser pairs and ask participants to select the best one. We find that 48% of participants vote for MoLE, slightly inferior to HanDiffuser (52%). All these results demonstrate that our method is effective and competitive with the state-of-the-art methods. More importantly, our method is user-friendly because both HanDiffuser and HyperHuman rely on additional conditions to enhance human and hand generation: HyperHuman takes text and skeleton as input; HanDiffuser needs text, a SMPL-H model, camera parameters, and hand skeleton. In contrast, MoLE only relies on text without the need for any additional conditions, offering greater flexibility and ease of use while maintaining competitive performance.
**Q3:** Ablation study on experts to see the results of training only one expert compared to training two experts.
**A3:** Following the reviewer's advice, we train only one expert and compare its performance with that of two experts. We find that one expert achieves $20.19 \pm 0.03$ HPS(%), inferior to that of two experts ($20.27 \pm 0.07$ HPS), which demonstrates the necessity of using a dedicated expert each for face and hand.
We hope that our responses above can address your concerns, and turn your assessment to the positive side. If you have any questions, please let us know during the rebuttal window. We appreciate your suggestions and comments! | Rebuttal 1:
Rebuttal: ## General Response
Thank all reviewers for their time and effort in reviewing our paper. We also thank all reviewers for their valuable feedback, which is instrumental in enhancing the quality of our work. We hope our pointwise responses below can clarify all reviewers’ confusion and alleviate all concerns. **We add the visualization materials of our rebuttal in the attachment.**
We thank all reviewers again for their time.
Pdf: /pdf/44267df3fbda2f8d1c869200c873bdf36d5190b1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Maia-2: A Unified Model for Human-AI Alignment in Chess | Accept (poster) | Summary: The paper introduces a unified modeling approach called Hermes for aligning human and AI performance in chess. Hermes effectively captures human chess styles across different skill levels, providing a coherent reflection of player improvement. The approach incorporates a skill-aware attention mechanism that dynamically combines player skill levels with encoded chess positions, allowing the model to adapt to varying player skills. Experimental results demonstrate that this unified framework significantly enhances the alignment between AI and human players, facilitating a deeper understanding of human decision-making and the development of AI-guided teaching tools.
Strengths: 1. This paper is easy to read and follow, with detailed descriptions and accompanying code.
2. The method is simple yet effective, demonstrating remarkable results in various settings.
Weaknesses: 1. The method is actually straightforward, as it only employs the multi-head attention mechanism, a general technique applicable in various settings and applications.
2. The motivation for using attention and the reasons for its effectiveness are unclear.
3. The concept of skill used in this paper is not well-defined, which may limit its applicability to the chess environment only.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How is skill defined in this paper? Is it manually designed or given by the chess setting? How does it handle this setting, and why can't we implicitly obtain different skill levels?
2. How could this method be expanded to settings other than chess, such as those used in multi-agent systems[1]?
3. How can this method be expanded to different chess variants or adapted for a large-scale chess model?
Ref:
[1] Yuan L, Zhang Z, Li L, et al. A survey of progress on cooperative multi-agent reinforcement learning in open environment[J]. arXiv preprint arXiv:2312.01058, 2023.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: A/N
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Design choices (W2)**
We agree that we didn’t sufficiently explain our architecture choice. The rationale behind our design is that each channel (feature map) of the ResNet output represents different aspects of a chess position, and the attention blocks actively select and interact with the features according to the given skill level. Evidence can be found in Figure 4: for skill-dependent concepts, the representation before attention blocks understands them uniformly across all skill levels with high accuracy, whereas after the attention blocks the representation is attenuated with higher skill levels understanding the concepts better than lower skill levels. This shows that attention blocks effectively “pretend” not to know the concepts to model the degree of imperfection in human moves, whereas skill-independent concepts are understood similarly by the representations before and after the attention block. Table 3 “w/o Att” shows that simply concatenating the skill embeddings with the flattened ResNet outputs did not work well. Thus, a more sophisticated way of conditioning is needed.
**Contribution (W1)**
To the best of our knowledge, we are the first to emphasize **coherence** in human behavior modeling. In particular, our work introduces the first human behavior model that is coherent across various skill levels and even improves the move-matching accuracy.
The primary methodological contribution lies in the **unified modeling** approach for **coherent** human behavior modeling that is enabled by this specifically designed model architecture. While we believe that the architecture has conceptual benefits that help make these contributions possible, and we will include the rationale behind the architecture in the revisions, we do not claim that the architecture is optimal in any sense; we view the specifics of the architecture as secondary to the advances, i.e., move prediction coherence and accuracy, provided by the unified modeling approach. We will also mention other architecture choices as promising avenues for future work.
**Definition of Skill (W3, Q1)**
The skill level is defined by the Elo rating system, which was originally proposed for two-player zero-sum games [1] and widely used in Chess. Each player is associated with an Elo rating before starting a game, and the Elo rating will be updated after the game. We use the Elo rating before the game as annotated labels for skill levels.
Extracting skill embeddings from past moves is another research question, which gives individual skill embeddings instead of grouped skill embeddings. The performance can be promising if sufficient past moves are provided [2]. We see this as a promising direction for future work and will add it to the paper.
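For readers unfamiliar with the Elo system referenced above, the standard expected-score and update formulas can be written as a short sketch. The K-factor below is a conventional choice for illustration, not a value taken from the paper.

```python
def elo_expected(r_a, r_b):
    """Expected score of player A against player B under the Elo system."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """Rating of A after one game; score_a is 1 (win), 0.5 (draw), or 0 (loss).
    K = 32 is a common choice, not a value specified in the paper."""
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# A 1500-rated player beating an equal opponent gains K/2 = 16 points:
# elo_update(1500, 1500, 1) -> 1516.0
```

The rating a player holds before the game thus summarizes their past results against rated opposition, which is what the model uses as its skill-level label.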
**Generalization to other domains (Q2)**
Elo rating systems have now become widely used in many domains, such as LLMs [3]. Besides Elo, our categorical skill-level modeling can be easily adapted to any continuous rating or discrete grading system, for example, proficiency in math problem-solving.
Unlike multi-agent systems, we treat this setting as effectively single-agent, where the chess engine itself will predict human moves without interacting with any other agents. Although Hermes itself is not designed as a multi-agent system, it could be extended to multi-agent systems in the broader context of studying interactions between human-like agents to achieve specific goals, in particular competitions among agents.
**Adapting to Chess Variants (Q3)**
It is fairly easy to apply our method for chess variants as long as the human historical data is sufficient and the skill level is provided. In particular, the Lichess database provided human historical data for Antichess (27.7M games), Atomic (21.6M games), and Chess960 (20.1M games), the only difference would be the data and the labels for the prediction heads.
[1] Wikipedia. 2024. "Elo Rating System." Wikimedia Foundation. Last modified July 22, 2024.
[2] McIlroy-Young, Reid, et al. "Learning models of individual behavior in chess." KDD 2022.
[3] Chiang, Wei-Lin, et al. "Chatbot arena: An open platform for evaluating llms by human preference." arXiv 2024.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response, I maintain my score at this stage. | Summary: This work explores developing a unified model to predict human moves in Chess. To address the coherence challenges, the authors propose to use skill-aware attention with channel-wise patching to encode skill levels and board positions into a neural network model. Experimental results show their proposed model achieves comparable or better performance in prediction accuracy and coherence compared to SOTA models.
Strengths: This work is overall very solid. The proposed model is clearly presented, with enough details for reproduction. Evaluation is also convincing and thorough.
Weaknesses: A minor problem is about the skill level encoder. The proposed Hermes model encodes both players’ skills, but Maia model only encodes one player’s, which adds an advantage to Hermes model and makes comparisons in human move prediction potentially unfair.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. About the volatile predictions of the Maia model, is this problem more due to prediction errors or the volatile nature of human playing data? For example, a move that relies on several subsequent moves to be effective might be a bad choice for a middle-level player, but a good choice for both low-level players (where the opponent doesn’t know how to counter) and high-level players (where the active player can manage the subsequent changes). I’m not an expert in chess, just wondering whether this is a possible cause of the incoherence problem.
2. In Figure 10, is Maia-2 actually the proposed Hermes model? This evaluation of win-rate seems indirect. Why not directly pair Hermes-1500 (by setting the active player level to 1500, for example) with Maia-1500 and evaluate the final outcome, to determine which model is better if the goal is to evaluate move quality?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: No significant limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Skill level encoder (W1)**
Maia implicitly encodes both players’ skill levels by only selecting the games between the same-strength players for model training. In Table 1, we have equated the active and opponent skill levels to ensure fair comparisons, and our model outperforms Maia in this setting. Note that Maia is restricted to training on games between players at the same skill level, which is a significant limitation on the distribution of data it considers.
Not only does Hermes outperform Maia in Maia’s setting, but our unified model is also capable of properly modeling situations where the player skill levels are different. This allows for better flexibility and improved performance on both matched and unmatched skill-level settings (Figure 2). In addition, having both ratings can help derive insights into human behavior. For example, improving the players' skills themselves affects human decisions more than the varied opponent skill levels (Figure 3B).
**Maia’s Incoherence (Q1)**
We strongly suspect that Maia’s incoherence is much more due to noise than to signal. While it’s possible that some positions may have non-monotonic relationships with the probability of choosing the “best” move, it is highly unlikely that the true relationships are as chaotic as Maia’s predictions (which often change direction 4–5 times throughout the 9-step skill range).
**Figure 10 (Q2)**
Yes, it is Hermes; thank you for pointing out this typo, which we have fixed. We didn’t aim to evaluate the predicted move quality of Hermes or Maia because our goal is to replicate human moves instead of maximizing quality. Figure 10 shows the human move prediction accuracy conditioned on move quality: for better or worse moves, how well can we predict? The higher the win-rate loss, the lower the move quality. Therefore, we find that human move prediction models are good at higher-quality moves, which are more certain, and they get worse at lower-quality moves, which are more random and thus hard to predict. Our unified model Hermes still outperforms Maia under such settings.
---
Rebuttal Comment 1.1:
Comment: Thanks for responses. As other reviewers have noted, there are some areas where the method could be clarified, along with some corrections needed for typos and figures.
Despite these minor revisions, I firmly believe that this work surpasses the acceptance threshold. Their focus on coherence in human behavior introduces an interesting research topic. The proposed unified model can be easily applied to modeling human behavior in other domains. This work has significant potential to inspire future research, and I do not find it limited by concerns regarding novelty, contribution or generalization (as mentioned by other reviewers).
I will maintain my current evaluation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your support. | Summary: The paper proposes a unified modeling approach named Hermes for aligning human and AI behaviors in chess. It addresses the limitations of previous models by integrating a skill-aware attention mechanism that dynamically adapts to various skill levels of players. The Hermes model aims to enhance AI-guided teaching tools and provide deeper insights into human decision-making in chess. The model is evaluated based on move prediction accuracy and coherence, showing improvements over existing models.
Strengths: The paper presents an approach to human-AI alignment in chess through the introduction of a skill-aware attention mechanism that dynamically adapts to players’ skill levels. This technique addresses the non-linear nature of human learning and significantly improves the coherence of AI models across different skill levels. The paper is well-structured and clearly written. The evaluation of the Hermes model is thorough, demonstrating notable improvements in move prediction accuracy and coherence over existing models like Maia and traditional chess engines such as Stockfish and AlphaZero.
Weaknesses: Despite its strengths, the paper has several limitations. While the skill-aware attention mechanism is presented as innovative, the overall novelty of the paper is somewhat limited as it builds on existing models and techniques without introducing fundamentally new concepts in AI or chess modeling. Additionally, the paper does not adequately address potential biases introduced by relying heavily on data from a specific online platform, which may affect the generalizability of the model. The experiments mainly report move prediction accuracy and move prediction coherence, which are not sufficient to support the model’s practical effectiveness in gameplay scenarios. Most importantly, the paper lacks human-AI experiments that would substantiate the claims of alignment.
Thus, I find the method lacks novelty and the experiments are insufficient to support the claims, leading me to recommend the paper for rejection.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
The font in figures is too thin.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper discussed the limitations in appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Contribution**
To the best of our knowledge, we are the first to emphasize coherence in human behavior modeling. It is important to reconsider the assertion that there are "no fundamentally new concepts," as our work introduces the first human behavior model that is coherent across various skill levels and even improves upon the state-of-the-art move-matching accuracy.
The primary methodological contribution lies in the unified modeling approach for coherent human behavior modeling that is enabled by this specifically designed model architecture. While we believe that the architecture has conceptual benefits that help make these contributions possible, and we will include the rationale behind the architecture in the revisions, we do not claim that the architecture is optimal in any sense; we view the specifics of the architecture as secondary to the advances, i.e., move prediction coherence and accuracy, provided by the unified modeling approach. We will also mention other architecture choices as promising avenues for future work.
**Generalization to other platforms**
Given the universal and objective nature of chess rules and strategies, it is unlikely that human behaviors in chess differ systematically from one platform to another. For example, given a recorded chess game, we strongly believe it would be extremely difficult to predict whether it was played on Lichess or some other platform. This observation mitigates concerns about Hermes’s generalizability to other platforms.
**Human Studies**
In an ideal human experiment, we would give a position to a human at a particular rating, and compare their chosen move to our model output. Our experiments do exactly this, and thus we view them as massive human studies that measure the move-matching accuracy and coherence with the recorded behaviors of real humans.
In addition to this main experiment, we've also performed additional experiments that address other dimensions of your question. In particular, we've implemented a randomized experiment on Lichess: human players challenge our bots, and we randomize whether players play against Maia or Hermes. Our final result is that our higher move-matching and our vastly improved coherence, across all skill levels, come at no cost to human subject engagement, and in fact slightly increase it: players rematch Hermes after the first game 1.5 percentage points more often than Maia (40.6% vs. 39.1%). Although engagement is not our main objective (move-matching and coherence are), this is further promising evidence that we have achieved our goal of a human-aligned AI model that coherently captures human style across different skill levels.
**Typos/Minors**
Thanks, we will modify them.
---
Rebuttal Comment 1.1:
Title: About contribution
Comment: Thanks for your detailed replies.
In your reply, you claim that you are "the first to emphasize coherence in human behavior modeling" and "first human behavior model that is coherent across various skill levels and even improves upon the state-of-the-art move-matching accuracy". What are the differences between your paper and the paper titled "Aligning Superhuman AI with Human Behavior: Chess as a Model System" [1]? They claimed that "...a **customized version of AlphaZero** trained on human chess games, that **predicts human moves** at a much higher accuracy than existing engines, and can **achieve maximum accuracy when predicting decisions made by players at a specific skill level in a tuneable way**."
**Thus, from my limited knowledge and your reply, I don't think your paper is "the first". I will maintain my score as "Reject".**
[1] https://www.cs.toronto.edu/~ashton/pubs/maia-kdd2020.pdf
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with our rebuttal. The major difference between our work and the paper you mentioned ([1]) is coherence. Maia [1] is a set of independent models, one for each skill level, that each independently achieve a respectable accuracy on human move-matching at their targeted skill level. Hermes (our model), in contrast, is a single unified model that accurately predicts moves at all skill levels in a **coherent way**. The problem with Maia's approach is that its predictions are incoherent—its predicted moves in the same position p are unrelated to each other. It may (and often does) predict that, say, only 1100, 1400, and 1800 rated players will play the right move in position p, whereas 1200, 1300, 1500, 1600, 1700, and 1900 rated players will play the wrong move in position p. This runs counter to how people actually improve: as people progress up the skill levels, they learn concepts, and once they start playing the right move in position p they will tend to keep playing it as they get better. (There may be cases where the relationship between skill and correctness is non-monotonic, but anecdotally these are rare—much rarer than Maia predicts non-monotonicity.) Hermes, on the other hand, makes much more coherent predictions. We directly compare the coherence of Maia and Hermes's predictions in Table 4. Maia only treats around 1.5% of positions monotonically, but Hermes treats around 27% of the same positions monotonically, a huge improvement. These results, combined with the fact that [1] makes no mention of coherence, is what support the claims you quoted: "the first to emphasize coherence in human behavior modeling" and "first human behavior model that is coherent across various skill levels and even improves upon the state-of-the-art move-matching accuracy". | Summary: This work tackles the problem of modeling chess agents at varying skill levels. 
Prior work learns separate models for each skill level, so the authors introduce their method “Hermes” which uses skill-aware attention to adapt the predictions based on the skills i.e. chess ratings of both players in the game. Technically, Hermes uses a categorical embedding for different buckets of player ratings, and uses multi-head attention to project the game state to the player move, some auxiliary game information, and a value head. The active and opponent skill embeddings are projected and added to the query matrix in the attention step, to make the model skill-aware. The authors train and evaluate the model on large datasets of chess gameplay across varying ratings, and conduct in-depth behavior analysis of their models compared to the baselines. They demonstrate that Hermes slightly outperforms the baselines with higher confidence predictions. Further, it enables additional capabilities such as modeling chess players over the entire spectrum of skill levels, and monotonic improvement of predicted moves as the skill increases.
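The query-conditioning mechanism this summary describes can be illustrated with a toy single-head sketch in plain Python. All weight matrices, dimensions, and names here are invented for illustration; the paper's actual architecture uses multi-head attention over ResNet feature maps and learned projections.

```python
import math

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def skill_aware_attention(tokens, active_skill, opp_skill, W_q, W_k, W_v, W_s):
    """Toy single-head sketch: the projected skill embeddings are added to
    the query, so the attention pattern depends on both players' skills."""
    d = len(W_q)  # query dimension
    skill = [a + o for a, o in zip(matvec(W_s, active_skill),
                                   matvec(W_s, opp_skill))]
    keys = [matvec(W_k, t) for t in tokens]
    values = [matvec(W_v, t) for t in tokens]
    out = []
    for x in tokens:
        # Skill conditioning: add the combined skill embedding to the query.
        q = [qi + si for qi, si in zip(matvec(W_q, x), skill)]
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        attn = softmax(scores)
        out.append([sum(a * v[i] for a, v in zip(attn, values))
                    for i in range(len(values[0]))])
    return out
```

Changing either skill embedding shifts every query and hence the attention weights, which is how a single set of parameters can produce different behavior for different skill pairings.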
Strengths: 1. The paper tackles an important problem of skill-aware modeling of human behavior. The authors instantiate this in the setting of chess games between players of varying skill levels. The intuition of the overall method is well understood, the framework and the parameters are clearly mentioned, and the results are shown over the standard datasets and compared to recent baselines.
2. The paper is well written. In the paper, the processing steps for the dataset, including filtering and balancing are explained clearly. The authors do not discard rare games between players at very different chess ratings. Is using these rare games helpful for the model? It would be nice to see some ablation showing the effect of this additional data on model performance.
3. Hermes uses auxiliary information prediction to align the model better. This is very intuitive, as forcing the model to predict auxiliary information from the current state should improve its understanding of the game concepts. I am curious to know whether this has been explored in past work. If not, it would be interesting to see if this could improve the performance of the baseline agents.
4. The paper presents an in-depth analysis of the performance and predictions of Hermes under different conditions. It shows that Hermes is more confident in its predictions than the baseline that learns individual models at individual skill levels. Further, Hermes performs similarly across all pairs of skills showing that it captures and models the players' ratings over the entire spectrum, while the baseline is accurate only around the particular skill level they are trained on. Finally, they also show that the model implicitly differentiates in its predictions on the moves that are skill-dependent and independent.
Weaknesses: 1. The paper focuses on improving the predictive model performance for chess games, but the improvement over the baseline is very small (2%). Further, the table doesn't include any error ranges and has no statistical testing to show if the performance improvement is statistically significant.
2. The paper does not cite some recent work on modeling agents with varying skill levels [1]. I would recommend the authors further include and cite works on modeling diverse agents and partners, specifically in multi-agent learning literature.
3. In Hermes, the players’ skill embeddings are projected and added to the query matrix in the attention step, to make the model skill-aware. However, there are no experiments or intuition to explain the significance of this architectural choice. Compared to simple conditional networks (or more recent approaches such as FiLM [2]), it would be nice to see the significance of this particular attention mechanism.
4. Some corrections regarding the manuscript: Figure 3 refers to an Errors (middle) figure which is missing. I believe there is a typo in Figure 5, where the top row has Hermes labels which should be Maia.
[1] Jacob, Athul Paul, et al. "Modeling Boundedly Rational Agents with Latent Inference Budgets." arXiv preprint arXiv:2312.04030 (2023).
[2] Perez, Ethan et al. “FiLM: Visual Reasoning with a General Conditioning Layer.” AAAI Conference on Artificial Intelligence (2017).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How does the granularity of skill clusters affect the model's performance?
2. What is the training and test split over the dataset?
3. Could the model interpolate between varying skill levels? I would be curious to see what would be its performance on unseen skill levels.
4. Does the Q-Q plot for Value head prediction only include accurate predictions?
5. Is there a qualitative analysis of where the improvement in Hermes over the baseline comes from? Does it improve all skill levels or is it restricted to a low number of buckets?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: 1. The paper focuses primarily on modeling chess agents, so a discussion on the broader impact of this method would be valuable. Here, the model assumes access to the ground-truth skill embedding of the player. However, it would be interesting to see if it is possible to extract the skill embedding from the past moves, i.e., implicit modeling of player/opponent skill.
2. The model's accuracy is around 54%, which is still pretty low for the model to be deployed. The evaluation metrics are limited to the model's prediction accuracy. A human study or evaluation against skill-specific agents could further demonstrate Hermes' effectiveness in modeling skills at all levels.
3. The paper has limited citations to past work on modeling diverse agents, so it is difficult to assess the novelty and the impact of the contribution. Further, one of the main contributions is the skill-aware module, so I would suggest that the authors include some baselines or discussion to explain the intuition and benefits of using this mechanism over other simpler baselines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Main focuses & Broader Impact (L1, L2, W1)**
We contribute an approach to human move prediction that is not only the new state of the art for accuracy, but our model achieves **coherence** in its predictions. To power algorithmic teaching tools, we believe that it is not enough to treat different skill levels independently and make predictions that don’t make coherent sense. Instead, we need coherent move predictions to algorithmically capture the trajectory of human ability as we progress from beginner mistakes to expert decisions. This way, we enable the building of systems that guide people along efficient learning paths. To accomplish this, we design a skill-aware attention mechanism for the **unified modeling** of human behavior across various skill levels, instead of modeling each skill level independently as previous methods did.
**Move prediction accuracy (W1, Q5, L2)**
Thank you for pointing out that we didn’t sufficiently explain the significance of our performance gains. In the human move prediction problem for amateur players, the ceiling accuracy is far below 100% given the randomness and diversity of their decisions—even the same player won’t always make the same decision when faced with the same position. Our 2 percentage point gain is substantial considering that the difference between Maia (the previous state-of-the-art model for this task) and Leela (a traditional chess engine not trained for this task at all) is only 6 percentage points. We will update the paper to make this clearer.
In response to Q5, Hermes demonstrates a performance improvement over Maia across virtually all combinations of the active player’s and the opponent’s skill levels (see Figure 2).
In response to W1, like other large models such as LLMs, it’s computationally infeasible to run the model repeatedly with different data splits due to the massive volume of data involved (9.1B positions). Nonetheless, besides discrete move-matching accuracy, we also adopt a continuous and thus more stable metric, perplexity, in Table 2, which shows a more significant improvement.
**Skill-aware Attention (W3, L3)**
We agree that we didn’t sufficiently explain our architecture choice. The rationale behind our design is that each channel (feature map) of the ResNet output represents different aspects of a chess position, and the attention blocks actively select and interact with the features according to the given skill level. Evidence can be found in Figure 4: for skill-dependent concepts, the representation before attention blocks understands them uniformly across all skill levels with high accuracy, whereas after the attention blocks the representation is attenuated with higher skill levels understanding the concepts better than lower skill levels. This shows that attention blocks effectively “pretend” not to know the concepts to model the degree of imperfection in human moves, whereas skill-independent concepts are understood similarly by the representations before and after the attention block. Table 3 “w/o Att” shows that simply concatenating the skill embeddings with the flattened ResNet outputs did not work well. Thus, a more sophisticated way of conditioning is needed.
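To make the conditioning step described above concrete, here is a minimal pure-Python sketch of skill-aware attention in which a projected skill embedding is added to each query. All dimensions, parameter matrices, and names here are illustrative assumptions, not the actual Hermes implementation:

```python
import math
import random

random.seed(0)
D = 4  # toy feature dimension (hypothetical; the real model's dimensions differ)

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def rand_mat():
    return [[random.uniform(-1, 1) for _ in range(D)] for _ in range(D)]

Wq, Wk, Ws = rand_mat(), rand_mat(), rand_mat()  # query, key, skill projections

def skill_aware_attention(features, skill_emb):
    """Attend over per-channel feature vectors; the projected skill embedding
    is added to every query, so the attention weights depend on skill level.
    (Values are the raw features here, for brevity.)"""
    skill_bias = matvec(Ws, skill_emb)
    keys = [matvec(Wk, g) for g in features]
    out = []
    for f in features:
        q = [a + b for a, b in zip(matvec(Wq, f), skill_bias)]
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(D) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * g[j] for wi, g in zip(w, features)) for j in range(D)])
    return out

# The same position features yield different outputs under different skill
# embeddings, which is the skill conditioning described above.
features = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(3)]
low = skill_aware_attention(features, [1.0, 0.0, 0.0, 0.0])
high = skill_aware_attention(features, [0.0, 0.0, 0.0, 1.0])
```

Under this reading, attending with different skill embeddings reweights the same feature channels, which matches the observation in Figure 4 that the post-attention representation is attenuated by skill level.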
The primary methodological contribution lies in the **unified modeling** approach for **coherent** human behavior modeling that is enabled by this specifically designed model architecture. While we believe that the architecture has conceptual benefits that help make these contributions possible, and we will include the rationale behind the architecture in the revisions, we do not claim that the architecture is optimal in any sense; we view the specifics of the architecture as secondary to the advances, i.e., move prediction coherence and accuracy, provided by the unified modeling approach. We will also mention other architecture choices such as FiLM as promising avenues for future work.
**Human studies (L2)**
In an ideal human experiment, we would give a position to a human at a particular rating, and compare their chosen move to our model output. Our experiments do exactly this, and thus we view them as massive human studies that measure the move-matching accuracy and coherence with the recorded behaviors of real humans.
In addition to this main experiment, we've also performed additional experiments that address other dimensions of your question. In particular, we've implemented a randomized experiment on Lichess: human players challenge our bots, and we randomize whether players play against Maia or Hermes. Our final result is that our higher move-matching accuracy and our vastly improved coherence, across all skill levels, come at no cost to human subject engagement; in fact, Hermes even seems to be slightly more engaging: players rematch Hermes after the first game 1.5 percentage points more often than they rematch Maia (40.6% vs. 39.1%). Although engagement is not our main objective (move-matching and coherence are), this is further promising evidence that we have achieved our goal of a human-aligned AI model that coherently captures human style across different skill levels.
**Please see the comments for the rest of our response.**
---
Rebuttal 2:
Title: Rebuttal (Continued)
Comment: **Skill level modeling (Q1, Q3, L1)**
**Interpolation (Q3):**
Interpolation between skill levels within our range is impossible since we already cover all the involved ratings: the “1100” rating bucket contains games played by players with ratings 1100–1199, and the “1200” bucket consists of games played by 1200–1299 rated players, etc.
**Granularity (Q1):**
We group players by ranges of 100 ratings to balance the natural volatility of ratings, the data availability within each rating range, and the practical meanings of ratings (e.g. the difference between 1200 and 1205 is not humanly perceptible). Also, this is consistent with prior work, enabling us to make direct comparisons.
**Automation (L1):**
Extracting skill embeddings from past moves is another research question, which gives individual skill embeddings instead of grouped skill embeddings. The performance can be promising if sufficient past moves are provided [1]. We see this as a promising direction for future work, and will add it to the paper.
**Multi-agent systems (W2, L3)**
We treat this setting as effectively single-agent, where the model is tasked with predicting a human move without interacting with any other agents. And since there is only one chess agent, the common or conflicting goals of agents are not defined. Therefore, this does not follow the definition of multi-agent systems: multiple decision-making agents that interact in a shared environment to achieve common or conflicting goals.
Although Hermes itself is not designed as a multi-agent system, it could be extended to multi-agent systems in the broader context of studying interactions between human-like agents to achieve specific goals, in particular competitions among agents. We will add more discussion of the multi-agent learning literature and make this clear in revisions, especially work related to human chess play, such as Section 6 of the mentioned paper.
**Dataset (Q2)**
Please refer to the beginning of Section 4 and Tables 6–10 for details. To ensure a fair comparison with Maia, which is tested on Dec 2019 data and trained on the rest of the data before Dec 2019, we trained a version of Hermes on only the data that Maia was trained on for a perfectly fair comparison. However, we also have more data now to train on. Therefore, we use games played in Dec 2019 and Dec 2023 for testing and the rest before Dec 2023 for training the full Hermes model.
**Q-Q plot (Q4)**
The positions used for plotting are **not** selected based on the correctness of the policy head predictions.
**Typos/Minors (W4)**
Thanks, we will modify them.
[1] McIlroy-Young, Reid, et al. "Learning models of individual behavior in chess." KDD 2022.
---
Rebuttal Comment 2.1:
Comment: I thank the authors for their detailed responses. From the responses and comments, I understand that the main contribution of this work is coherence; however, it is still unclear how Hermes ensures better coherence than the baselines. It would be great if the authors could provide some intuition about that. It seems that the coherence trend is observed post-training and so is not necessarily the main motivation behind the model. Or is there something specific to the unified modeling that I am misunderstanding here?
Given the current presentation of the contributions, I believe this paper needs more analysis and explanation to present a clear picture of the method and its benefits. Thus, I would maintain my score.
---
Reply to Comment 2.1.1:
Comment: Thank you for your comment. Our response addresses two main points:
1. Coherence is the central motivation of our work, not just a post-hoc observed outcome.
2. Hermes ensures better coherence than the baselines through its unified, parameter-sharing approach.
**Coherence as central motivation**
We emphasize that coherence is not only observed post-training--it is the central motivation of our work. The third sentence of the Abstract states "Critical to achieving this goal, however, is coherently modeling human behavior at various skill levels." (L5). The central limitation of previous work, and the motivation for our present work, articulated in the Introduction is Maia's lack of coherence (paragraph starting on L48: "Maia models players at different skill levels completely independently...Viewed as a whole, [Maia's predictions] are volatile", "In order to serve as algorithmic teachers or learning aids, our models of human behavior must be coherent"). The first sentence of the Discussion summarizes our contribution as "Hermes is a unified model architecture that can accurately and coherently capture human decision-making in chess across a broad spectrum of skill levels." (L383). We hope that situating coherence as the central idea of our paper in the Abstract, Introduction, and Discussion makes it sufficiently clear that coherence is what we are aiming for, instead of something we stumbled upon, but we would also be happy to implement any suggestions you have in order to make this point clearer.
**Architecture intuition**
As for why Hermes ensures better coherence than the baselines, we are happy to provide some intuition (which we will certainly incorporate into our revision to make sure this is as clear as possible). In one sentence, Hermes uses a **unified, parameter-sharing modeling approach** instead of Maia’s independent parameters as a way to regularize across skill levels.
To explain more fully, the root cause of Maia's lack of coherence is that the Maia models learn 9 independent sets of parameters, one for each of the 9 skill levels. This means that there is no mechanism to encourage or enforce consistency (i.e. coherence) across skill levels. Maia 1400 and Maia 1500, for example, are distinct models with completely separate training data and zero parameter overlap. As a result, Maia often outputs dramatically different predictions for neighboring skill levels on the same position, which leads to a lack of coherence. In contrast, in Hermes we learn a unified set of parameters to predict human decisions conditioned on skill level. This ensures that the conditional prediction will always be based on the shared knowledge in the one and only parameter space that we learn---therefore decisions made by 1500-rated players are partially informed by what 1400-rated and 1600-rated (etc.) players do. In other words, Hermes is implicitly regularized by the shared parameters across all skill levels, without over-optimizing towards any particular skill level. Neighboring skill levels will yield similar predictions, e.g., P(y|position, skill level_{i}) \approx P(y|position, skill level_{i+1}), unless Hermes recognizes that some condition for switching to another prediction is satisfied. This unified modeling approach ensures **coherence by design**.
In addition, our skill-aware attention module enables Hermes to learn non-trivial interactions between positions and skill levels. The skill-aware attention module plays a crucial role in maintaining coherence. Whereas the various Maia models learn different representations of the position for each skill level, Hermes first learns the same unified representation that it uses for all skill levels, and then the skill-aware attention module learns how different skill levels interact with the position to produce a move. By learning a unified representation of the position first, and then adjusting based on skill level, Hermes ensures that all skill levels are informed by a consistent understanding of the position. This decomposition---learning position representation separately from skill-level interaction---naturally encourages coherence across skill levels.
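As a toy illustration of the monotone notion of coherence discussed in this thread, the following sketch (with made-up data, not the paper's actual Table 4 evaluation code) computes the fraction of positions a model treats monotonically:

```python
def is_monotone(correct_by_rating):
    """A position is treated monotonically if, once the model predicts the
    right move at some rating, it keeps predicting it at all higher ratings."""
    seen_correct = False
    for correct in correct_by_rating:  # ordered from low rating to high rating
        if seen_correct and not correct:
            return False
        seen_correct = seen_correct or correct
    return True

def monotone_fraction(per_position_correctness):
    """Fraction of positions whose per-rating correctness is monotone
    (the quantity compared for Maia vs. Hermes in Table 4)."""
    return sum(map(is_monotone, per_position_correctness)) / len(per_position_correctness)

# Made-up predictions for three positions at five ascending rating levels.
# The last row is the Maia-style failure mode described above: right at some
# ratings but wrong at ratings in between.
preds = [
    [False, False, True, True, True],  # monotone
    [True, True, True, True, True],    # monotone
    [True, False, True, False, True],  # not monotone
]
print(monotone_fraction(preds))  # prints 2/3 as a float
```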
**Summary**
In summary, our central motivation is to develop a model capable of coherent and accurate human move prediction. Our unified modeling approach is deliberately chosen to solve Maia's parameter independence problem, and our skill-level attention module is specifically designed to maintain a shared position representation across all skill levels while better modeling the interactions between position and skill level. We hope this addresses your concern.
If this explanation clarifies that coherence is our central motivation and why Hermes's design encourages coherence, along with the other points we addressed in our first response, would you be open to considering raising your score?
---
Rebuttal 3:
Comment: I would like to thank the authors for their explanation. While the unified modeling approach is a sensible (but not very strong) intuition for coherence, I am curious about two points:
1. Intuitively, conditional models can still learn independent functions in an over-parameterized setting, so it would help if the authors could provide some details (model size, training set size, etc.) to show that this does not hold in their setting. This would provide more evidence for the claim that parameter sharing, and the information bottleneck induced by this approach, is the key element that unlocks coherence.
2. P(y|position, skill level_{i}) \approx P(y|position, skill level_{i+1}) - I believe this is a very strong claim, assuming that the learned network is Lipschitz continuous. Is there any theoretical evidence for that? Or is there any previous work that points to this for parameter sharing + attention-based methods?
Again, thanks for the responses. I am still open to increasing the scores and would take the additional information into account during the AC-reviewer discussion phase as well.
---
Rebuttal Comment 3.1:
Comment: Thank you for your continued engagement in the rebuttal phase!
Regarding point 1, Hermes has 23.3M parameters and was trained on 9.1B training positions, which is not the over-parameterized setting [1], where the number of parameters has to significantly exceed the number of training samples. Evidence that Hermes is indeed a conditional model can be found in Figures 2, 3 (B), and 4 in the paper, Figures 5, 6, and 7 in the Appendix, and the text accompanying these figures.
[1] Allen-Zhu, Zeyuan, Yuanzhi Li, and Yingyu Liang. "Learning and generalization in overparameterized neural networks, going beyond two layers." Advances in neural information processing systems 32 (2019).
Regarding point 2, it’s important to clarify that Hermes is deliberately designed to encourage coherence across skill levels without rigidly enforcing it. Our objective is not to impose coherence as a hard constraint, which might obscure legitimate differences in player behavior between skill levels, but to create a model architecture that naturally encourages coherence where the data supports it. (The informality of our previous explanation was intended to address your question in the response to our rebuttal, asking for intuition on how Hermes ensures better coherence than the baselines, but it is also appropriate given our goal of designing a model that encourages coherence when supported by the data but remains flexible enough to avoid enforcing it where it doesn’t belong.)
Hermes achieves this through a unified, parameter-sharing approach combined with a skill-aware attention mechanism. By sharing parameters across skill levels and allowing the model to adjust based on skill-level input, we induce a form of regularization that naturally enforces smoother transitions between predictions at adjacent skill levels. Importantly, this regularization is soft---it doesn’t impose artificial coherence where it doesn’t naturally exist in the data.
The principle of parameter sharing has been well-documented in other contexts, such as multi-task learning and transfer learning, where it has been shown to promote smoother, more coherent outputs across related tasks. You can view Hermes predicting slightly different rating levels as a multi-task learning problem where the tasks are very similar. When tasks (or in our case, skill levels) share underlying structures, parameter sharing allows the model to generalize knowledge effectively, resulting in more coherent outputs across these tasks. This principle is well-supported in the literature and is directly applicable to the challenge of modeling adjacent skill levels in chess.
Moreover, the skill-aware attention mechanism in Hermes allows the model to adapt its focus based on skill level, ensuring that while the underlying position representation is consistent, the nuances of how different skill levels interact with that position are captured appropriately. This mechanism plays a critical role in maintaining coherence without compromising the model’s ability to capture genuine differences across skill levels.
In contrast, Maia learns totally independent parameter sets, and treats the problem of predicting moves made by 1400 and 1500-rated players as completely distinct. Given infinite data Maia may also learn coherent predictions, but even given massive training data it fails to cohere anywhere near as well as Hermes. Our empirical results show that unifying the prediction tasks and using skill-aware attention has major practical benefits in achieving coherence.
In our revised manuscript, we would be happy to clarify these points further and better explain that our design choices are intended to encourage, though not necessitate, smooth and coherent behavior across skill levels. | Rebuttal 1:
Rebuttal: Thank you for your thoughtful reviews and constructive suggestions for our work. Our work has been recognized as addressing "an important problem of skill-aware modeling of human behavior" (Reviewer 1), introducing an "innovative skill-aware attention mechanism" (Reviewer 2), and being "very solid" with "convincing and thorough" evaluation (Reviewer 3). Reviewer 4 appreciates our "simple yet effective" methodology, yielding "remarkable results in various settings."
Here is a brief overview of the main concerns addressed.
**Main focuses & Broader Impact (R1, R2, R4)**
We contribute an approach to human move prediction that is not only the new state of the art for accuracy, but our model achieves **coherence** in its predictions. To power algorithmic teaching tools, we believe that it is not enough to treat different skill levels independently and make predictions that don’t make coherent sense. Instead, we need coherent move predictions to algorithmically capture the trajectory of human ability as we progress from beginner mistakes to expert decisions. This way, we enable the building of systems that can guide people along efficient learning paths. To accomplish this, we design a skill-aware attention mechanism for the **unified modeling** of human behavior across various skill levels, instead of modeling each skill level independently as previous methods did.
**Design choices (R1, R3, R4)**
We agree that we didn’t sufficiently explain our architecture choice. The rationale behind our design is that each channel (feature map) of the ResNet output represents different aspects of a chess position, and the attention blocks actively select and interact with the features according to the given skill level. Evidence can be found in Figure 4: for skill-dependent concepts, the representation before attention blocks understands them uniformly across all skill levels with high accuracy, whereas after the attention blocks the representation is attenuated with higher skill levels understanding the concepts better than lower skill levels. This shows that attention blocks effectively “pretend” not to know the concepts to model the degree of imperfection in human moves, whereas skill-independent concepts are understood similarly by the representations before and after the attention block. Table 3 “w/o Att” shows that simply concatenating the skill embeddings with the flattened ResNet outputs did not work well. Thus, a more sophisticated way of conditioning is needed.
Our primary methodological contribution lies in the **unified modeling** approach for **coherent** human behavior modeling that is enabled by this specifically designed model architecture. While we believe that the architecture has conceptual benefits that help make these contributions possible, and we will include the rationale behind the architecture in the revisions, we do not claim that the architecture is optimal in any sense; we view the specifics of the architecture as secondary to the advances, i.e., move prediction coherence and accuracy, provided by the unified modeling approach. We will also mention other architecture choices as promising avenues for future work.
**Human studies (R1, R2)**
In an ideal human experiment, we would give a position to a human at a particular rating, and compare their chosen move to our model output. Our experiments do exactly this, and thus we view them as massive human studies that measure the move-matching accuracy and coherence with the recorded behaviors of real humans.
In addition to this main experiment, we've also performed additional experiments that address other dimensions of this question. In particular, we've implemented a randomized experiment on Lichess: human players challenge our bots, and we randomize whether players play against Maia or Hermes. Our final result is that our higher move-matching and our vastly improved coherence, across all skill levels, come at no cost to human subject engagement, and in fact slightly increase engagement: players rematch Hermes (the new system in this paper) after the first game 1.5 percentage points more than Maia (40.6% vs. 39.1%). Although engagement is not our main objective (move-matching and coherence are) this is further promising evidence that we have achieved our goal of a human-aligned AI model that coherently captures human style across different skill levels.
More detailed responses are provided to individual reviews with pointers to **Limitations (L), Weaknesses (W), and Questions (Q)**.
We hope these clarifications have addressed your concerns and strengthened our paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimistic Critic Reconstruction and Constrained Fine-Tuning for General Offline-to-Online RL | Accept (poster) | Summary: The paper proposes a general offline-to-online (O2O) reinforcement learning method that can work with any offline RL algorithm. It addresses evaluation and improvement mismatches between offline datasets and online environments by (1) Re-evaluating the offline critic optimistically; (2) Calibrating the critic with the offline actor; (3) Performing constrained online fine-tuning. This approach shows stable and efficient performance improvements across various simulated tasks compared to existing methods.
Strengths: 1. This paper is well-written and easy to follow.
2. The re-evaluation and calibration procedures are indeed crucial for improving offline-to-online RL training; the authors make a great effort to show the importance of these procedures, with both empirical evidence and theoretical guarantees.
Weaknesses: There are several minor presentation issues, not very critical, but they significantly affect the visual quality of the paper.
1. In line 86, equation 2, the subscripts under the two expectations are too close together, making the formula a bit messy.
2. In line 132, equation 9, the parentheses of the first $f$ should be larger so that they cover the inputs.
3. In figure 3, the colors of some curves are very similar, making it hard to tell the performance of each method.
Technical Quality: 4
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the high praise and the comprehensive review of our paper.
**Q1:** There are several minor presentation issues, not very critical, but they significantly affect the visual quality of the paper.
**A1:** Thanks for your suggestions. We have fixed these presentation issues, and we will proofread the paper carefully to catch any remaining writing problems.
Thank you for your review again and we appreciate your suggestions for a better presentation. We are always willing to answer any of your further concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for the feedback, I have no further questions and I would like to keep my rating. | Summary: The paper addresses the offline-to-online (O2O) reinforcement learning problem with the goal of improving online performance by leveraging offline data. The primary contributions of this paper are twofold. First, it identifies and elaborates on two key challenges in O2O RL: evaluation and improvement mismatches, which differentiate offline and online RL. Second, it introduces a general method for transferring knowledge from any offline approach to three representative online methods. Both theoretical analysis and empirical experiments thoroughly validate the effectiveness of the proposed method.
Strengths: Significance: In contrast to existing works, this paper is the first to summarize two mismatches between offline and online methods. These mismatches, which relate to two types of offline approaches, reveal their negative effects on subsequent online fine-tuning.
Contribution: The proposed method effectively balances specificity and flexibility. To address the two distinct mismatches, the paper introduces policy re-evaluation and value alignment techniques, which yield optimistic Q-value estimates and accurate Q-value calibration, respectively. Furthermore, the method is applicable to a broad range of representative online methods, demonstrating its wide applicability.
Soundness: The experiments are thorough and strongly support the claims of the paper, including the motivation and effectiveness. Compared to multiple SOTA methods, the proposed method shows superiority.
Weaknesses: The method consists of three components. The first two components are developed to solve two mismatches. How about the third component? Is there another key issue in O2O learning tasks, such as another mismatch?
In value alignment, regarding different online methods, the paper develops different strategies. Does this imply that the method lacks generalizability?
Technical Quality: 3
Clarity: 3
Questions for Authors: Although the method appears straightforward, it comprises three components. What is its time complexity? The paper should include an analysis of time complexity.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not have a Limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your high praise and comprehensive review of our paper. We appreciate the questions you raised and are committed to delivering a comprehensive response to address the issues.
**Q1:** The method consists of three components. The first two components are developed to solve two mismatches. How about the third component? Is there another key issue in O2O learning tasks, such as another mismatch?
**A1:** There is indeed another key issue in O2O RL, known as _distribution shift_ in previous work [1][2][3]. This issue arises from the discrepancy between the offline dataset and the interactive data collected by the offline policy, which can negatively impact performance improvement. This challenge occurs whether the policy is evaluated using suboptimal offline data or high-quality but limited data. Although we strive to maintain an optimistic property for the critic and align the critic with the actor, achieving stable online fine-tuning remains challenging due to the inevitable _distribution shift_.
Most current offline algorithms focus on avoiding OOD actions and training a reliable policy on the states present in the dataset. However, due to the inherent optimism in online RL, encountering OOD states and actions is unavoidable, leading to performance fluctuations. This is particularly problematic in critical scenarios, especially high-risk ones. For OOD states, even if the policy is well-trained during the offline phase, it may still fail to produce favorable actions, potentially causing erroneous policy updates.
To conclude, while optimistic critic reconstruction can guarantee stable and efficient performance improvement initially, it is essential to implement constrained fine-tuning in later stages to maintain continued stability.
**Q2:** In value alignment, regarding different online methods, the paper develops different strategies. Does this imply that the method lacks generalizability?
**A2:** It is important to note that the different strategies correspond to different policy types and update mechanisms, as we study the O2O setting from the perspective of online RL. In online RL, the way the critic is aligned with the actor can vary significantly due to different foundational designs.
O2SAC is designed for stochastic policies updated in an off-policy manner; O2TD3 is for deterministic policies updated in an off-policy manner; and O2PPO is for stochastic policies updated in an on-policy manner. These three methods are representative and cover the major categories of existing mainstream online algorithms, making it straightforward to incorporate other advanced techniques. Thus, rather than indicating a lack of generalizability, the developed strategies demonstrate our method's adaptability to a wide range of policy types and update mechanisms in online RL.
**Q3:** Although the method appears straightforward, it comprises three components. What is its time complexity? The paper is suggested to include an analysis on time complexity.
**A3:** In policy re-evaluation, since the policy is fixed and the re-evaluation of the critic is straightforward, the computational cost of re-evaluation is significantly lower than that of offline learning. The time cost of value alignment is somewhat higher but still less than that of the offline phase. In fact, the time cost of value alignment is approximately proportional to the number of alignment steps, since both the actor and critic are updated in this phase.
We conducted an experiment on the _hopper-medium-v2_ environment using the O2SAC method and listed the time cost in different phases as follows.
| Training Phase | Offline (CQL) | Policy Re-evaluation | Value Alignment |
| ----------- | ----------- | ----------- | ----------- |
| Training Steps | 1M | 0.5M | 0.5M |
| Time Cost | 5.4h | 0.95h | 2.0h |
However, it is worth noting that although we set the training steps for value alignment at 500k, in some environments only a few alignment steps are needed to calibrate the critic with the offline actor, as shown in Fig. 1 and Fig. 11. Only in _antmaze_ environments, where it is hard for the critic to capture the sparse reward signal, are more alignment steps necessary. Additionally, in constrained fine-tuning, since only the Lagrangian multiplier is additionally updated and the interaction cost dominates, the time cost increases very little.
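As a side note on why constrained fine-tuning adds so little overhead: the multiplier update is a single scalar dual-ascent step per iteration. The following toy primal-dual sketch illustrates the pattern on a generic constrained problem (all names and the problem itself are illustrative, not the paper's CMDP objective):

```python
# Toy primal-dual (Lagrangian) updates: minimize x^2 subject to x >= 1.
# Lagrangian L(x, lam) = x**2 + lam * (1 - x). The multiplier update is a
# single projected scalar step, which is why it adds almost no per-step cost.
eta = 0.01
x, lam = 0.0, 0.0
for _ in range(20000):
    x -= eta * (2 * x - lam)              # primal gradient-descent step
    lam = max(0.0, lam + eta * (1 - x))   # scalar dual-ascent step, kept >= 0

# KKT optimum of this toy problem: x* = 1 with multiplier lam* = 2.
print(round(x, 3), round(lam, 3))
```

The same structure appears whenever a constraint is folded into the reward: the policy update is unchanged except for the penalty term, and only the multiplier needs an extra (cheap) update.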
Moreover, in O2O RL, we are typically not concerned about the time cost in the offline process, as different offline methods take different amounts of time. Instead, we prioritize the cost of interactions during online fine-tuning. Our method re-evaluates and aligns the critic with the offline actor solely within the offline dataset, making the time cost less critical.
While the time cost is not the main concern, we will include a detailed analysis of the time complexity in the revised paper to provide a clearer understanding of the computational requirements for each component.
[1] Lee et al. Offline-to-online reinforcement learning via balanced replay and pessimistic q-ensemble, CoRL 2022.
[2] Yu et al. Actor-critic alignment for offline-to-online reinforcement learning, ICML 2023.
[3] Nakamoto et al. Cal-QL: Calibrated offline RL pre-training for efficient online fine-tuning, NeurIPS 2023.
Thank you for your constructive review again. We hope we have resolved your concerns. We are always willing to answer any of your further concerns. | Summary: This paper proposes to handle evaluation and improvement mismatches in offline-to-online RL. To this end, the authors suggest re-evaluating the pessimistic critic and calibrating the misaligned critic with the reliable offline actor. Then, they perform constrained fine-tuning. They evaluate the performance on the standard offline-to-online RL benchmark.
Strengths: - The framework is theoretically analyzed.
- Experimental evaluation is extensive.
Weaknesses: - Difficult to follow up on technical novelties. I suggest adding concept figures or algorithm tables to highlight core contributions.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Does re-evaluation require heavy computation? Is it more efficient compared to RLPD [1], which initialize replay buffer with offline dataset instead of offline pre-training?
[1] Ball, Philip J., et al. "Efficient online reinforcement learning with offline data." ICML 2023
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our paper. We are glad that you consider our work "theoretically analyzed". We are happy to answer all your questions.
**Q1:** Difficult to follow up on technical novelties. I suggest adding concept figures or algorithm tables to highlight core contributions.
**A1:** Thanks for your valuable suggestion. To facilitate understanding, we briefly recap the techniques here. As analyzed in our paper, evaluation and improvement mismatches are common in O2O scenarios. Recognizing that the offline policy is well-trained and trustworthy, we first utilize FQE to re-evaluate the critic optimistically, as in the online process. Given factors like partial data coverage, we then calibrate the critic with the offline actor to achieve an optimistic and reliable critic. Finally, to address distribution shift, we incorporate CMDP into online fine-tuning for stable and efficient performance improvement. We have included the pseudocode in Appendix J.
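To make the re-evaluation step concrete, here is a minimal, purely illustrative sketch of fitted Q-evaluation (FQE) on a toy two-state MDP with a fixed policy; the MDP, rewards, and variable names are hypothetical and not from the paper:

```python
# Minimal fitted Q-evaluation (FQE) sketch on a hypothetical toy MDP.
# Two states {0, 1}, a single (fixed-policy) action, deterministic
# transitions: state 0 -> state 1, state 1 -> state 1 (absorbing),
# with reward 1 on every transition.
GAMMA = 0.9

# Offline "dataset" of (state, reward, next_state) transitions.
dataset = [(0, 1.0, 1), (1, 1.0, 1)]

# Iteratively regress Q toward the fixed-policy Bellman target
# Q(s) <- r + gamma * Q(s'). Tabular, so the "regression" is exact.
Q = {0: 0.0, 1: 0.0}
for _ in range(500):
    Q = {s: r + GAMMA * Q[s2] for (s, r, s2) in dataset}

# Closed form: Q(1) = 1 / (1 - gamma) = 10, and Q(0) = 1 + gamma * Q(1) = 10.
print(Q[0], Q[1])
```

In the actual method the critic is a neural network and the policy is the offline actor, but the update has the same fixed-policy Bellman-target structure.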
**Q2:** Does re-evaluation require heavy computation? Is it more efficient compared to RLPD [1], which initialize replay buffer with offline dataset instead of offline pre-training?
**A2:** Since the policy is fixed and the re-evaluation of the critic is straightforward, the computational cost of re-evaluation is significantly lower than that of offline learning. In our experiments, the re-evaluation takes about 1 hour for O2SAC (whereas CQL takes more than 5 hours) and 40 minutes for O2TD3. For O2PPO, updating the critic by fitting the returns independently of the policy results in even lower time costs.
Generally, in O2O RL, we are typically not concerned about the time cost in the offline process, as different offline methods take different amounts of time. Instead, we prioritize the cost of interactions during online fine-tuning. Our method re-evaluates and aligns the critic with the offline actor solely within the offline dataset, making the time cost less critical.
Moreover, it is challenging to directly compare our method with RLPD because they address different objectives. RLPD focuses on learning a policy from scratch using a given dataset, whereas our method aims to improve a well-trained policy in O2O scenarios with limited interactions in a stable and efficient manner. Nonetheless, we acknowledge that integrating RLPD techniques could potentially enhance our method's performance, as our approach imposes minimal additional constraints during the online process. This is an interesting future direction of our work.
Thank you for your review again. We hope we have resolved your concerns. We are always willing to answer any of your further concerns. | Summary: The paper proposes a general framework to bridge offline-to-online RL. It first studies two types of issues in O2O RL: evaluation mismatch and improvement mismatch. The proposed method addresses these issues by combining policy re-evaluation, value alignment, and constrained online fine-tuning. Unlike prior methods, the proposed framework can be applied to any offline RL algorithm. In the experiment, the method was built on top of CQL/SAC, TD3+BC, and IQL/PPO, demonstrating its performance in D4RL tasks.
Strengths: 1. The breakdown of the two mismatch problems is interesting.
2. The proposed method is compelling in that it can generally bridge any offline RL and online RL algorithms.
3. The results in D4RL locomotion tasks are solid.
4. Overall, the paper is well-written and contains informative ablation studies.
Weaknesses: 1. [Major] It is a bit confusing to me which offline pre-training methods were used for each result. Is it correct that all the results from O2SAC are pre-trained with CQL, O2TD3 is pre-trained with TD3+BC, and O2PPO is pre-trained with IQL? I suggest mentioning these details more clearly in the paper.
2. [Major] While the paper suggests that the proposed method is universal to the offline RL algorithm, there is not much comparison of using the same online RL algorithm with different offline pre-training methods. Can you provide ablations of running O2SAC on the same task with different offline methods, such as CQL, IQL, and ODT?
3. [Major] While the results for the harder AntMaze tasks (antmaze-medium/large) are briefly mentioned in the appendix, could the authors provide the full comparisons to previous methods and add them to Table 1?
4. [Major] Is it possible to include results on the Adroit binary task, as in [1, 2, 3], which is a common benchmark for studying the sample efficiency of online RL with offline data?
5. [Minor] Figure 5 (a) is a bit confusing to me. Which plot corresponds to the unconstrained fine-tuning mentioned below?
>(l.550) For O2SAC, unconstrained fine-tuning suffers from a performance fluctuation, and direct fine-tuning from offline may lead to faster performance improvement, but drops sharply in the subsequent phase, e.g. in Figure 5(a)
6. [Minor] For the value alignment objective in Eq.13, can the authors explain more on why L_retain is needed? What will happen if we only use L_align as the objective?
7. [Minor] It would be interesting to see if the proposed method can be combined into a recent sample-efficient online RL algorithm that can use a high UTD ratio, such as RLPD [3].
8. [Minor] The legend for Figure 3 (c) seems to be incorrect: O2TD3 -> O2PPO
[1] Nair et al., AWAC: Accelerating Online Reinforcement Learning with Offline Datasets, 2020
[2] Nakamoto et al. Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning, 2023
[3] Ball et al., Efficient Online Reinforcement Learning with Offline Data, 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: The questions are included in the previous section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful review and positive recognition of our paper. We are glad that you consider our work “interesting, solid, well-written”. We appreciate the questions you raised and are committed to delivering a comprehensive response to address the issues.
**Q1:** Suggest mentioning the offline pretrained algorithms used for O2SAC, O2TD3 and O2PPO more clearly in the paper.
**A1:** Thanks for your valuable suggestion. For each online method, we adopt an offline algorithm whose policy form is consistent with it, simply to keep the experiments straightforward. For example, both CQL and O2SAC adopt the squashed Gaussian policy. In fact, any offline algorithm can be used to initialize our methods, with a change of policy form through behavior cloning, as discussed in Appendix I of our paper. We will mention the offline pre-trained algorithms used for O2SAC, O2TD3, and O2PPO more clearly in the paper.
**Q2:** While the paper suggests that the proposed method is universal to the offline RL algorithm, there is not much comparison of using the same online RL algorithm with different offline pre-training methods. Can you provide ablations of running O2SAC on the same task with different offline methods, such as CQL, IQL, and ODT?
**A2:** Thanks for your suggestion. We provide the results in Fig. 1 of the uploaded PDF. The initial performance of O2SAC initialized from ODT is lower than the others, since simple behavior cloning (we directly maximize the likelihood of the actions output by offline ODT while keeping an appropriate entropy) can harm performance, as discussed in Appendix I. In _hopper-medium-v2_, however, the performance improves quickly. Our analysis is that, thanks to the constraint, the policy can recover the offline performance (a normalized score of about 97 for ODT), as the output of the cloned policy stays near the ODT policy. We will include these results and this discussion in the revised version.
**Q3:** While the results for the harder AntMaze tasks (antmaze-medium/large) are briefly mentioned in the appendix, could the authors provide the full comparisons to previous methods and add them to Table 1?
**A3:** Since some prior work does not provide hyper-parameters for these tasks, and TD3+BC performs extremely poorly (almost 0) on them, resulting in a terrible initialization for O2TD3, it may be inappropriate to add these comparisons to Table 1. However, we provide some comparisons in Table 1 of the uploaded PDF. We are willing to take more time to compare with more algorithms, such as ODT, and provide a separate table in the revised paper.
**Q4:** Is it possible to include results on the Adroit binary task, as in [1, 2, 3], which is a common benchmark for studying the sample efficiency of online RL with offline data?
**A4:** We tried the experiments on the Adroit binary tasks but had difficulty obtaining a favorable offline policy with CQL and TD3+BC, since the corresponding hyper-parameters are not given in the papers. We tried the hyper-parameters of the antmaze tasks but achieved extremely poor performance (almost 0). We also tried cloning a policy from the offline policy learned by IQL, but the performance of the cloned policy is still poor. Therefore, we cannot provide the corresponding results for O2SAC and O2TD3 at the moment. In Fig. 2 of the uploaded PDF, we show the results of O2PPO initialized by IQL. Since the Adroit binary tasks return sparse rewards like antmaze, given the effectiveness on the antmaze tasks, it is reasonable to expect that our methods can be applied to the Adroit binary tasks.
**Q5:** Figure 5 (a) is a bit confusing to me. Which plot corresponds to the unconstrained fine-tuning mentioned below?
**A5:** We apologize for the confusion caused by the mismatched expression. The expression refers to plots in a previous version of our paper, but we did not update the expression after revising the paper. Thanks for pointing this out and we will correct it in the revised paper.
**Q6:** For the value alignment objective in Eq.13, can the authors explain more on why L_retain is needed? What will happen if we only use L_align as the objective?
**A6:** The role of L_align is to suppress the Q-values of OOD actions. At the beginning of value alignment, the target Q-values of OOD actions can be extremely low, as $\log \pi_{off}(a_{ood}|s)$ can be extremely low, resulting in an overall, or even catastrophic, underestimation of Q-values, thereby destroying the optimistic property obtained after policy re-evaluation. The role of L_retain is to keep the optimistic property by preserving the Q-values of reliable actions $\dot{a}$, which is necessary for value alignment since we take them as anchors to calibrate the Q-values.
**Q7:** It would be interesting to see if the proposed method can be combined into a recent sample-efficient online RL algorithm that can use a high UTD ratio, such as RLPD [3].
**A7:** Yes, this is indeed another advantage of our method. Since we only add a constraint that can be considered part of the reward, the policy iteration process remains consistent with the normal online approach, which makes it feasible to incorporate techniques from advanced efficient RL algorithms. We conducted some experiments using a high UTD ratio of 10 (but still updating the Lagrangian multiplier once per step) and achieved better performance improvement, as shown in Fig. 3 of the uploaded PDF. We will include these results and discussions in the revised version.
**Q8:** The legend for Figure 3 (c) seems to be incorrect: O2TD3 -> O2PPO
**A8:** Thanks for your careful review. We will correct this mistake in our revised paper.
Thank you for your insightful review again. We hope we have resolved your concerns. We are always willing to answer any of your further concerns.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the additional results and clarifications. Overall, I find my concerns have been addressed. I will increase the score to 6. | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for your thorough evaluation of our paper. Your constructive comments have been invaluable in helping us enhance our work.
Pdf: /pdf/ab828a1400e5467ba6cd8fd5cbb2e410784f3354.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Collusion of Memory and Nonlinearity in Stochastic Approximation With Constant Stepsize | Accept (spotlight) | Summary: This paper studies constant step-size stochastic approximation algorithms with Markovian noise. It is known that in this case, the error of a stochastic approximation algorithm does not vanish asymptotically, even when considering averaging; that is, \theta_n has a bias. Previous works have shown how to study the bias of such algorithms for linear stochastic approximation. In this paper, the authors tackle the more challenging case of non-linear stochastic approximation. Most of the paper is devoted to proving that the bias is of order O(\alpha). Some discussion on the applicability of the results is presented.
Strengths: The paper proposes a characterization of the bias under the challenging setting of Markovian noise plus non-linear drifts.
The analysis is asymptotically tight as the step size \alpha converges to 0, i.e., the expression for the bias is not a bound but an equality plus smaller order terms.
The implications of the results are discussed.
Weaknesses: Most of the assumptions needed to obtain the results are relatively mild but two conditions are quite strong:
1. strong monotonicity (A3)
2. Smoothness (A2)
I think that the paper should discuss these assumptions in more detail and highlight that they really limit the applicability of the result.
The assumption developed in part 4.2 to avoid assuming that the iterates are bounded seems quite strong, not verified in many practical cases (for instance, any instance of stochastic gradient descent where the noise has finite support would not satisfy this assumption), and very artificial (is the only purpose of this assumption to avoid the boundedness assumption on the iterates?).
The paper is extremely long (54 pages including proofs and references). To me, this says that there is either too much content or that the results are too diluted. As a result, the paper is very technical and hard to read. There should be more effort to make the paper more readable. Some suggestions:
- consider a slightly less general setting
- avoid considering sub-cases (like section 4.2) that are a bit orthogonal to the paper
The practical applications of the results are unclear. Some potential applications are presented in Section 4.5 / 4.6 but there are no experiments or simulations to confirm that this actually work (these sections are interesting, though).
There are a lot of papers on the subject. This can be seen as a good sign (this is an active area of research), but at the same time, it is hard for me to really assess the novelty of the results. Note that there are some (recent) papers that seem related to this work and that might be cited. This last point is not a weakness, since some of them are extremely recent (available online after the submission deadline):
- Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise. S Allmeier, N Gast (2024)
- Bias in Stochastic Approximation Cannot Be Eliminated With Averaging, Caio Kalil Lauand, Sean P. Meyn (2022)
- Revisiting Step-Size Assumptions in Stochastic Approximation. Caio Kalil Lauand, Sean Meyn (2024)
A comparison with these papers might be useful (even if this is not mandatory for the two 2024 papers).
Technical Quality: 3
Clarity: 3
Questions for Authors: There are some questions / comments in the limit that would deserve some comments from the authors.
The Markovian noise (x_k) is assumed exogenous (it does not depend on \theta). For some applications (like Q-learning with a navigating policy derived from \theta), this would not be satisfied. Could this assumption be lifted?
The next order term is a O(\alpha^{3/2}): is it the sharpest bound or could O(\alpha^2) be obtained?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation of the assumptions (see "weaknesses") should be better discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the strengths and contribution of our paper, and for the constructive comments. Below we provide our responses to the comments. In the following, we use [1] [2] etc. to refer to papers cited in our submission, and [a] [b] etc. for new references, with bibliographic information given at the end of this reply.
**Comment: Discussion on the limitation of Assumption 2 (smoothness) and Assumption 3 (strong convexity/monotonicity).**
We thank the reviewer for raising this point. We refer the reviewer to the **global rebuttal** for additional discussion on the smoothness and strong convexity/monotonicity assumptions.
**Comment: Discussion on Assumption 5 on noise minorization.**
We address this comment in the **global rebuttal**, as it is an important point.
**Comment: paper structure and presentation**
We thank the reviewer for their feedback on the paper's structure. We will incorporate the suggestions in the revision to make our paper more succinct.
**Comment: practical application and numerical experiments.**
We thank the reviewer for the constructive comment. We briefly remark on the practical application of our results. As discussed in Section 4.6, many GLMs satisfy the conditions of our results. Our theory suggests that by running SGD with two stepsizes in parallel and tracking the PR-averaged iterates for each, we can use RR-extrapolation to obtain a reduced-bias estimate. Additionally, we can construct confidence intervals under our CLT guarantee.
We also include numerical experiments in the **global rebuttal** to demonstrate our results. Specifically, we performed a set of experiments with $L_2$-regularized logistic regression with Markovian data to support our theoretical findings: the presence of bias and its reduction through Richardson-Romberg (RR) extrapolation, as well as the central limit theorem (CLT) of Polyak-Ruppert (PR) averaged iterates. Employing $L_2$-regularized logistic regression in these experiments also showcases the practical applicability of our results within the scope of generalized linear models (GLMs). We will add the numerical experiments in the revised paper.
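The mechanism behind RR extrapolation can be seen directly from the bias expansion. The tiny sketch below (with made-up constants, not data from our experiments) models an estimator whose mean follows $E[\bar\theta(\alpha)] = \theta^* + b_1\alpha + b_2\alpha^2$ and shows that the combination $2\bar\theta(\alpha) - \bar\theta(2\alpha)$ cancels the leading $O(\alpha)$ bias term:

```python
# Richardson-Romberg extrapolation sketch with hypothetical constants.
# Model the averaged iterate's mean as theta* + b1*alpha + b2*alpha**2,
# mimicking the leading-order bias expansion (b1, b2 are made up).
theta_star, b1, b2 = 1.0, 0.5, 0.2

def mean_estimate(alpha):
    return theta_star + b1 * alpha + b2 * alpha ** 2

alpha = 0.01
plain = mean_estimate(alpha)                       # bias ~ b1*alpha
rr = 2 * mean_estimate(alpha) - mean_estimate(2 * alpha)
# The alpha-linear terms cancel, leaving rr = theta* - 2*b2*alpha**2.
print(plain - theta_star, rr - theta_star)
```

In practice this corresponds to running the algorithm with stepsizes $\alpha$ and $2\alpha$ in parallel and combining the two PR-averaged iterates, as described above.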
**Comment: Additional reference.**
We thank the reviewer for pointing out the additional related works. All three works examine general (non-linear) SA under Markovian noise with a constant stepsize. Both [b] and our work prove weak convergence, but with different techniques. [b] provides an **upper bound** for the asymptotic bias, while we offer an equality and closed-form solution for the leading-order bias. [a] presents an upper bound for the PR-averaged iterates and, similar to our results, demonstrates the effectiveness of RR-extrapolation in reducing bias. Besides constant stepsize, [c] also explores diminishing stepsizes and examines the impact of the stepsize decay rate on the asymptotic statistics of PR-averaged SA. We will discuss these related works in our revised literature review.
**Comment: Assumption on exogenous Markovian noise.**
We acknowledge that our current model of Markovian noise does not account for dependence on $\theta$. In fact, the very recent work (pointed out by the reviewer) [a] considers the Markovian model that incorporates such dependence. We are confident that our work can be extended to adopt a similar modeling approach for the Markovian noise and achieve comparable results.
**Comment: Tighter higher order in the bias.**
We agree with the reviewer's comment that $O(\alpha^{3/2})$ might not be the tightest next order in the bias characterization, and we believe that $O(\alpha^2)$ can be obtained. Improving the next highest order to $O(\alpha^2)$ requires a more refined characterization of the asymptotic second order $E[(\theta_\infty-\theta^*)^{\otimes2}]$ by following a similar strategy as our current approach. Thus, we leave this refinement out of the scope of the current paper. We conjecture that, with appropriate assumptions on smoothness and noise moment, we can prove refined characterizations of higher orders of $E[(\theta_\infty-\theta^*)^{\otimes p}]$, and subsequently we obtain $E[\theta_\infty]=\theta^*+\sum_{n=1}^m\alpha^nb_n+O(\alpha^{n+1})$.
References:
[a] S. Allmeier and N. Gast. Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise. 2024.
[b] C. K. Lauand and S. Meyn. Bias in Stochastic Approximation Cannot Be Eliminated With Averaging. 2022.
[c] C. K. Lauand and S. Meyn. Revisiting Step-Size Assumptions in Stochastic Approximation. 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for this detailed answer. I have no further questions. | Summary: The present paper obtains a representation for the asymptotic bias of constant step-size nonlinear stochastic approximation with Markovian noise. In particular, this characterization makes the hindering effect of memory and nonlinearity explicit in terms of the algorithm's performance. Moreover, the authors establish ergodicity of the parameter-noise joint process (with and without projection of estimates), obtain finite-time bounds on L_p moments of the estimation error and establish a central limit theorem for the constant gain algorithm. Finally, a bias attenuation technique based upon the Richardson-Romberg extrapolation is proposed for the nonlinear algorithm.
Strengths: The contributions and assumptions are clearly identified. To the best of the reviewer's knowledge, the results are novel and exciting: it is great to see the hindering effect caused by the interplay between memory and nonlinearity in SA.
This paper is well written, but could use some polishing.
I did not have enough time to review all proofs in detail, but the analysis seems correct.
Weaknesses: One of the weaknesses of this paper is the fact that no numerical experiments are provided to illustrate any of the main results. Although a discussion on how the theory fits within Generalized Linear Models is given before the conclusions, I encourage the authors to include a simple toy example to illustrate some of their main results such as the CLT or the bias attenuation technique.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Could the authors clarify if assuming strong monotonicity and uniform boundedness of g and its Lipschitz constant over x are indeed needed for the result in Thm. 4.6? I can see their importance for finite-time bounds, but if there are needed for the asymptotic bias bound, I believe that the authors should mention the employment of stronger assumptions when comparing their work with previous research on asymptotic results.
- I am not sure I understand what the authors mean by ``fine-grained'' when talking about the result in Thm. 4.6. Upon further inspection, it seems that equation (4.2) is an extension of the result in [40] to nonlinear recursions: it consists of a representation for the dominant bias term plus an upper bound, as in this previous work. Is the fine-grained part related to the upper bound for \alpha?
- I might have missed this in the text, but could the authors clarify if the results in Sections 4.3 and 4.5 require projection?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors addressed the limitations in their work through a clear list of assumptions and discussions after presenting the main results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our paper’s contribution and strengths, and for the constructive comments. We provide our detailed responses below. In the following, we use [1] [2] etc. to refer to papers cited in our submission, and [a] [b] etc. for new references, with bibliographic information given at the end of this reply.
**Comment: numerical experiments.**
We thank the reviewer for the suggestion. We include numerical experiments in the **global rebuttal** to demonstrate our results. In particular, we conducted two sets of experiments using $L_2$-regularized logistic regression with Markovian data to verify our theoretical results: the existence of bias and its attenuation via Richardson-Romberg (RR) extrapolation, and the central limit theorem (CLT) of Polyak-Ruppert (PR) averaged iterates. Using $L_2$-regularized logistic regression in the experiment also demonstrates the practicality of our results in the context of generalized linear models (GLMs). We will add the numerical experiments in the revised paper.
**Comment: Clarification of Assumption 3 (boundedness and smoothness of $g$) and Assumption 4 (strong monotonicity) for Theorem 4.6.**
We thank the reviewer for raising this point. For discussions on strong convexity and Lipschitz smoothness in proving weak convergence, we refer the reviewer to the global rebuttal. Here, we discuss the role of these two assumptions in bias characterization.
The key technique in the bias characterization is a Taylor expansion around $\theta^\ast$. The strong monotonicity and Lipschitz smoothness together ensure that $E\|\theta_\infty-\theta^*\|^{2p}$ is of order $O((\alpha\tau)^p)$, which subsequently controls the order of the residual term in the Taylor expansion. Moreover, the algebraic manipulation in the bias characterization involves the inversion of $\bar{g}'(\theta^*)$ (in the context of SGD, this is the Hessian of the objective function). Therefore, strong monotonicity (convexity) ensures the validity of this inversion.
We believe that we could potentially relax the strong monotonicity assumption to Hurwitz $\bar{g}'(\theta^\ast)$ to ensure the validity of such a matrix inversion. It is unclear what our approach would imply if we only have a positive semi-definite (not full-rank) $\bar{g}'(\theta^*)$. We conjecture that the joint process would still converge, but the bias may exhibit drastically different behaviors that we do not yet fully understand. This conjecture is informed by [a], where the SA update is strongly convex and Lipschitz-smooth but non-differentiable at $\theta^*$; [a] shows that the bias admits a very different behavior, with the leading term scaling with $\sqrt{\alpha}$ instead of $\alpha$. Hence, without the strong monotonicity assumption at $\theta^*$, we do not yet have a precise conjecture for the bias behavior.
**Comment: "Fine-grained" characterization of bias in Theorem 4.6.**
We clarify how our bias characterization (4.2) -- (4.5) in Theorem 4.6 differs from the result (8) in [40]. We first remark that the characterization in [40] is an **upper bound** ($\limsup$) on the averaged iterate's bias, not the asymptotic bias of the limiting random variable $E[\theta_\infty]-\theta^*$, since [40] does not prove weak convergence for general SA. Additionally, the result in [40] only shows that the leading term of the bias is of $\alpha$-order. In contrast, we provide a closed-form expression in (4.2) -- (4.5) for the leading term of the bias, which is computable if the underlying Markovian noise is known or can be approximated. Moreover, we show that the leading term can be decomposed into three components: Markovian noise $b_m$, non-linearity $b_n$, and the compound effect $b_c$, which cannot be inferred from [40]. Therefore, our result is a more "fine-grained" characterization.
**Comment: Clarification on projection for results in Section 4.3/4.5.**
We thank the reviewer for raising this point. The results in Section 4.3 (non-asymptotic result) and 4.5 (CLT), as well as 4.4 (asymptotic bias characterization), are follow-up results to the weak convergence results in Sections 4.1 and 4.2 (weak convergence with or without projection). In obtaining these follow-up results, projection for uniform boundedness of iterates $\theta_t$ is not required. If we have weak convergence without projection, these results subsequently do not require projection. For discussions on removing the projection and minorization noise assumption, please refer to the **global rebuttal**.
References:
[a] Y. Zhang, D. L. Huo, Y. Chen, Q. Xie. Prelimit Coupling and Steady-State Convergence of Constant-stepsize Nonsmooth Contractive SA. 2024.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifying and detailed responses. I also thank them for including an experiment that validates their theorems. Also, it is interesting to see the experiment with i.i.d. data performing worse than the experiment with Markovian data.
Here a few more observations:
**On a bias equation** I apologize if I was not clear in my first response. When I mentioned [40], I was referring to their equation (10), where there are no limsups, and not (8).
However, it is very clear to me that this does not affect the value of this work, since they do not incorporate the effects of nonlinearity. Still, I think it would be beneficial to mention in the final version that a similar fine-grained expression was obtained for the more restrictive case of linear $g$.
**Strong Monotonicity** Maybe it would be beneficial to provide a bit more discussion regarding Assumption 3 in the final version like in the responses provided (e.g. when it is satisfied/ or if it could be lifted).
---
Reply to Comment 1.1.1:
Comment: Thank you for the clarification and response. We will include those discussions in our revised paper. | Summary: This paper considers a nonlinear stochastic approximation (SA) problem with Markov noise (MC). It is assumed that the MC is uniformly geometrically ergodic. Instead of the standard iterative procedure, a projection onto a bounded set is additionally introduced (the latter can be relaxed under the additional assumption of the existence of a positive density). Under these assumptions the authors manage to write a decomposition that characterizes the bias. At the same time, they manage to identify three factors influencing the bias: the factor of MC, the factor of nonlinearity of the procedure, and the factor of interaction between MC and nonlinearity. In addition, bounds for the Polyak-Ruppert averaging and the Richardson Romberg procedure are given.
Strengths: - Decomposition that characterizes the bias with explicit dependence on MC, non-linearity and interactions between MC and non-linearity.
Weaknesses: It would be good to obtain:
- high probability bounds instead of the MSE
- remove additional projection step and assumption on the density (it could be useful to consider convergence in the weighted W distance instead of V norm)
- explicit dependence on the asymptotic variance of MC in the $O(\tau/(k - k_0))$.
Technical Quality: 2
Clarity: 3
Questions for Authors: Could you please comment on the fact that there is no dependence between stepsize and number of iterates in the Polyak-Ruppert averaging?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our paper’s contribution and strengths, and for the constructive comments. We provide our detailed responses below. In what follows, we use [1] [2] etc. to refer to papers cited in our submission, and [a] [b] etc. for new references, with bibliographic information given at the end of this response.
**Comment: High probability bounds instead of MSE.**
We are grateful to the reviewer for this suggestion. We believe that we can derive high probability bounds by applying the Markov inequality to our Proposition 4.2 (convergence of $E\|\theta_k-\theta^*\|^{2p}$). Our bias characterization allows these bounds to be further tightened. If one would like to prove exponential tail bounds, stronger assumptions on the noise sequence, such as having exponential tails, are necessary as shown in [a] for iid data.
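Concretely, the step we have in mind is the plain Markov inequality applied to the $2p$-th moment (schematic; $C_p$ denotes the constant from our moment bound):

```latex
P\big(\|\theta_k - \theta^*\| \ge \epsilon\big)
  \;\le\; \frac{E\|\theta_k - \theta^*\|^{2p}}{\epsilon^{2p}}
  \;\le\; \frac{C_p\,(\alpha\tau)^{p} + \text{(transient terms)}}{\epsilon^{2p}},
  \quad \text{for any } \epsilon > 0,
```

which yields polynomial-tail high-probability bounds; exponential tails would require stronger assumptions on the noise, as noted.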
**Comment: Discussion on projection and Assumption 5 on minorization.**
We address this comment in the **global rebuttal**, as it is an important point.
**Comment: Discussion on explicit dependence on variance of Markov chain.**
We thank the reviewer for raising this point. We believe that it is straightforward to extend our results to characterize the dependence on the variance of the Markovian noise in the second moment of Polyak-Ruppert (PR) averaged iterates. We conjecture that by quantifying the variance introduced by the Markovian noise under an additional assumption that $\|g(\theta,x)-g(\theta,y)\|\leq \sigma_d$ for any $x,y\in\mathcal{X}$ and $\theta\in R^d$ (a similar assumption appears in [46]), the explicit dependence on the noise variance would be $O(\sigma_d^2\tau_\alpha/(k-k_0))$.
**Comment: Discussion on dependence between stepsize and number of iterates.**
We thank the reviewer for bringing up this question. Our result allows for independent choices of stepsize $\alpha$ and the number of iterates $k$, provided that $\alpha$ is sufficiently small and $k$ sufficiently large. The first two terms of the second moment bound of the PR averaged iterates do not depend on the number of iterates, which verifies the presence of asymptotic bias with a leading term proportional to the stepsize $\alpha$. The remaining two terms depend on the number of iterates and vanish as $k\to\infty$: the third term corresponds to the asymptotic variance and decays at the rate of $1/k$, and the fourth term corresponds to the optimization error. Our result also allows for optimizing the stepsize $\alpha$ if the number of iterates $k$ is known a priori. Suppose $k$ is fixed and $k_0=k/2$; then the optimized $\alpha$ is obtained by balancing the first $\alpha^2$ term and the last $(1-\alpha\mu)^{k_0/2}/(\alpha (k-k_0)^2)$ term. The optimized order is $\alpha = O(k^{-2/3})$, which matches the order in [46], and, to the best of our knowledge, provides the tightest dependence for Markovian linear SA with constant stepsize. We would appreciate it if the reviewer could clarify any remaining questions to ensure our response fully addresses the question on the dependence between stepsize $\alpha$ and the number of iterates $k$ in the PR-averaging bound.
References:
[a] I. Merad and S. Gaïffas. Convergence and concentration properties of constant step-size SGD through Markov chains. 2023.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I retain my current score. | Summary: This paper investigates stochastic approximation (SA) with Markovian data and nonlinear updates under constant stepsize, and establishes the weak convergence of the joint process $(x_t, \theta_t)$. It also presents a precise characterization of the asymptotic bias of the SA iterates.
Strengths: I find this paper well-written. The analysis appears to be correct (although details are not checked). Also, both the literature review and motivation are very clear. It seems to me that this paper has solved a challenging problem, caused by Markovian data and nonlinear updates.
Weaknesses: I find the presentation of this paper a bit technical for people that are not very familiar with this area. In addition, perhaps some numerical experiments should be performed to better illustrate the theory.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. About Assump. 3: for many GLMs, we actually do not have strong convexity. I feel this assumption is a bit strong.
2. Page 4, line 147: is the notation superscript "\cross 2" defined anywhere? I assume this denotes the outer product of a vector.
3. Similar to Assump. 3, Assump. 4 should be justified further as well.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our paper’s contribution and strengths, and for the constructive comments. We provide our detailed responses below.
**Comment: Numerical experiments.**
We thank the reviewer for the suggestion. We conduct a set of experiments, running SGD on $L_2$-regularized logistic regression with Markovian data, to verify our theoretical results: the existence of bias, bias reduction via Richardson-Romberg (RR) extrapolation, and the central limit theorem (CLT) of Polyak-Ruppert (PR) averaged iterates. Using $L_2$-regularized logistic regression also demonstrates the practicality of our results in the context of generalized linear models (GLMs). For a more detailed discussion and the experiment results, please refer to the **global rebuttal** and the figures in the one-page PDF.
We will add the numerical experiments in the revised paper.
**Comment: Discussion on Assumption 3 on strong convexity/monotonicity.**
We thank the reviewer for raising this point, and we refer the reviewer to the **global rebuttal** for clarification on the strong convexity/monotonicity assumption.
**Comment: the $\otimes$ notation.**
We use "$u\otimes v$" to denote the tensor product of the two vectors $u$ and $v$, and "$u^{\otimes k}$" to denote the $k$-th tensor power of vector $u$. When $k=2$, it is simply the outer product $uu^\top$. We thank the reviewer for pointing this out, and we will add this definition to the paper.
**Comment: Discussion on Assumption 4 on the noise sequence.**
We would like to further clarify Assumption 4, in which we assume the existence of the $2p$-th moment of the noise sequence. We note that it is standard in the SA literature to assume the existence of the $2p$-th moment of the noise sequence to control the $2p$-th moment of the limiting random variable, as seen in [18,53,60] (cited in our submission). Moreover, we believe that this assumption is necessary; without a finite $2p$-th moment for the noise sequence, the limiting random variable $\theta_\infty$ might not have a finite $2p$-th moment. For example, consider the simple case $\theta_{t+1}=\theta_t-\alpha(\theta_t-w_t)$, with $\theta_0=0$ and iid noise $w_t$. Note that we have $\theta_T=\alpha\sum_{t=0}^{T-1}(1-\alpha)^{T-1-t}w_t$. Therefore, it is easy to see that if the noise sequence does not have a finite $2p$-th moment, $\theta_\infty$ would not have a finite $2p$-th moment.
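This unrolling can be verified numerically (a quick illustrative check with Gaussian noise; note the exponent $T-1-t$, since $\theta_{t+1}=(1-\alpha)\theta_t+\alpha w_t$):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, T = 0.1, 200
w = rng.standard_normal(T)          # illustrative iid noise sequence

# Recursive form: theta_{t+1} = theta_t - alpha * (theta_t - w_t), theta_0 = 0
theta = 0.0
for t in range(T):
    theta = theta - alpha * (theta - w[t])

# Unrolled closed form: theta_T = alpha * sum_t (1 - alpha)^(T-1-t) * w_t
closed = alpha * sum((1 - alpha) ** (T - 1 - t) * w[t] for t in range(T))
```

The two quantities agree up to floating-point error, confirming that the $2p$-th moment of $\theta_T$ is inherited directly from the moments of the noise.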
---
Rebuttal Comment 1.1:
Comment: The authors have done a good job in responding to my previous comments. I find their responses clear and detailed. I have no further questions. | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful feedback. In this global rebuttal, we discuss smoothness, strong monotonicity, the use of projection, and the minorization assumption of the noise sequence. We also supplement our theoretical results with a set of numerical experiments, with results in the one-page PDF.
In what follows, we use [1] [2] etc. to refer to papers cited in our submission, and [a] [b] etc. for new references with bibliographic information given at the end of this response.
**Differentiability and strong monotonicity:** In proving weak convergence, we assume that the stochastic approximation (SA) update operator $g$ is sufficiently smooth (three-times differentiable) and strongly monotone (for SGD this implies strong convexity of the objective function). This ensures controlled evolution of the iterates with Markovian/correlated data. The differentiability supports a Taylor series expansion of $g$ up to the second order with a bounded remainder, crucial for analyzing the $\spadesuit$ term in the convergence proof in Section 3 and for bias characterization where Taylor expansion around $\theta^\ast$ is the key technique. Some form of differentiability assumption is standard in SA literature, such as [18,40,a], particularly when one seeks a fine-grained characterization of the iterates' distributional property beyond MSE bounds. Such an assumption is satisfied by many GLMs, such as logistic regression and Poisson regression. When $g$ is not differentiable, SA behavior can differ significantly (even with iid data) [g], which is beyond our scope.
The strong monotonicity assumption is common in SA literature. Together with smoothness, it allows us to establish geometric distributional convergence. While some GLMs by themselves do not satisfy this condition, applying $L_2$-regularization (equivalently, weight decay) ensures strong convexity and improves statistical performance. It is a standard calculation that one can appropriately choose the regularization parameter to derive tight results for non-strongly-convex functions.
We believe that it is possible to relax the strong monotonicity assumption to weaker conditions, Hurwitz $\bar{g}'(\theta^\ast)$ [a,40], or even to non-convex problems satisfying structural properties like dissipativity or generalized Polyak-Lojasiewicz (PL) inequality [60].
**Projection and minorization:** Projection steps have a longstanding presence in the SA literature for tractability in convergence theory, as seen in many analyses of SGD [b,c,d,e,5]. Although the projection is an analytical device rather than an algorithmic proposal, it does not incur much computational cost in practice, as it only involves rescaling the iterates, and the projection radius can be estimated a priori. Before our work, no studies had proven weak convergence for non-linear SA with Markovian data and constant stepsize, with or without the projection. Thus, our result is valuable, as it is the first to establish weak convergence in this setting.
In Theorem 4.3, we provide an alternative proof of weak convergence using the Drift and Minorization technique, which does not require a projection. We acknowledge that the minorization assumption is not satisfied by some noise models, but argue that it is easily met by adding a small noise with a continuous distribution, a common practice for promoting exploration and privacy.
In addition, it is possible to extend our current results and prove weak convergence without the minorization noise or the projection. This can be achieved by employing the established Drift and Contraction technique [f]. As discussed in Section 3, when $\|\theta_{k-\tau}^{[1]}-\theta^*\|^2+\|\theta_{k-\tau}^{[2]}-\theta^\ast\|^2$ is too large, obtaining a contraction in $E\|\theta_k^{[1]}-\theta_k^{[2]}\|^2$ may not be feasible. In this case, the drift and contraction technique suggests using the drift of $E\|\theta_k-\theta^*\|^2$ in Proposition 4.2 to carefully balance the distance $E\|\theta_k^{[1]}-\theta_k^{[2]}\|^2\lesssim (E\|\theta_k^{[1]}-\theta_k^{[2]}\|^2)^r(E\|\theta_k^{[1]}-\theta^*\|^2+E\|\theta_k^{[2]}-\theta^*\|^2)^{1-r}$ and obtain a contraction.
**Numerical experiments:** We run SGD on $L_2$-regularized logistic regression with constant step sizes, without projection. Figure 1 confirms the Central Limit Theorem (CLT) result for averaged iterates. Figure 2(a) verifies the presence of an asymptotic bias approximately proportional to the stepsize $\alpha$, and illustrates the effectiveness of Richardson-Romberg (RR) extrapolation in reducing this bias. We also compare the bias under Markovian data ($x_{t+1}\sim P(\cdot|x_t)$) and iid data ($x_t\sim\pi$) in Figure 2(b). Interestingly, Figure 2(b) reveals that Markovian data does not necessarily lead to a larger bias than iid data. This is consistent with our theory, as the three bias terms $b_m,b_n,b_c$ may have opposite signs leading to cancellation. This result suggests that in the presence of nonlinearity, one should not avoid Markovian data simply for the sake of reducing bias. Rather, RR extrapolation may be more effective for bias reduction.
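As an illustration of the decomposition, in a purely linear toy problem only the Markovian term $b_m$ survives ($b_n=b_c=0$), so Markovian sampling produces a stepsize-proportional bias that vanishes under iid sampling from the same stationary law (a hedged sketch, not our logistic-regression experiment; all constants are illustrative):

```python
import numpy as np

def tail_mean(alpha, markov, steps=1_000_000, seed=0):
    """SA  theta <- theta - alpha * (a(x)*theta - b(x)) with x_t either a
    sticky two-state Markov chain or iid from its stationary law pi = (.5, .5)."""
    rng = np.random.default_rng(seed)
    a, b = [0.5, 1.5], [0.0, 1.0]          # theta* = E_pi[b] / E_pi[a] = 0.5
    u = rng.random(steps)
    if markov:
        x_seq = (np.cumsum(u < 0.05) % 2).tolist()  # switch w.p. 0.05 -> correlated
    else:
        x_seq = (u < 0.5).astype(int).tolist()      # iid draws from pi
    theta, acc, burn = 0.0, 0.0, steps // 2
    for t in range(steps):
        x = x_seq[t]
        theta -= alpha * (a[x] * theta - b[x])
        if t >= burn:
            acc += theta
    return acc / (steps - burn)

bias_markov = tail_mean(0.02, markov=True) - 0.5
bias_iid = tail_mean(0.02, markov=False, seed=1) - 0.5
```

In this linear setting the iid bias is zero up to sampling noise, while the Markovian bias is clearly nonzero; the cancellation effect seen in Figure 2(b) requires the nonlinear terms $b_n, b_c$.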
References:
[a] S. Allmeier and N. Gast. Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise. 2024.
[b] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust Stochastic Approximation Approach to Stochastic Programming. 2009.
[c] H. Kushner. Stochastic approximation: a survey. 2010.
[d] S. Lacoste-Julien, M. Schmidt, and F. Bach. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient. 2012.
[e] S. Bubeck. Convex Optimization: Algorithms and Complexity. 2015.
[f] Q. Qin and J. P. Hobert. Geometric convergence bounds for Markov chains in Wasserstein distance based on generalized drift and contraction conditions. 2022.
[g] Y. Zhang, D. L. Huo, Y. Chen, Q. Xie. Prelimit Coupling and Steady-State Convergence of Constant-stepsize Nonsmooth Contractive SA. 2024.
Pdf: /pdf/1feb99b8f330dd88b80b84123202b01c6af7f637.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
PhoCoLens: Photorealistic and Consistent Reconstruction in Lensless Imaging | Accept (spotlight) | Summary: This paper addresses the challenge of spatially varying Point Spread Function (PSF) in lensless imaging and introduces a two-stage approach for reconstructing lensless images.
Strengths: This paper is well-written, presenting a detailed statement of the problem, the physical imaging model, an innovative reconstruction method, and thorough experimental results. I believe this paper will significantly impact the fields of both lensless imaging and the broader area of spatially-varying computational imaging, given its contributions to both the imaging model and reconstruction methods. The demonstrated reconstruction results are impressive, showing significant improvements over existing methods.
I believe this paper will make a significant contribution to both lensless imaging and the broader field of spatially varying imaging reconstruction. The proposed learnable spatially varying PSF will provide valuable insights for many other applications.
Weaknesses: The only two weaknesses of this paper are: (1) the lack of real-world experiments, but this is mitigated by the use of a standard lensless imaging dataset for evaluation, and (2) the paper would benefit from incorporating more knowledge and discussion from the optics field to strengthen its soundness.
There are two suggestions to make this paper more robust: (1) Expand Fig. 4(b) and Fig. A1 to illustrate how PSFs change with different fields of view. (2) Expand Fig. 2 and Fig. 4(d) to better demonstrate the significance of considering spatially varying PSFs, possibly including pixel-wise error maps of the reconstruction results.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Fig. 4(a) should be revised. Light coming from an off-axis angle should not be shown as being ‘refracted’ to produce parallel output light, as this is inconsistent with the proposed spatially-varying PSF model. Instead, I suggest that the author illustrate two point sources: one originating from the axis with output also parallel to the axis (in cyan), and another from an off-axis point with output also at an off-axis angle (in blue).
2. In Fig. 1(1), vignetting is observed, while the reconstructed images show no signs of vignetting. It would be better to explain in the paper how the vignetting effect was removed.
3. It would be beneficial to provide more explanations from the optical perspective in Section 3.2 to strengthen the paper's soundness in optics. The paraxial imaging model (spatially invariant convolution) is commonly used to simplify the forward imaging model, but it is inherently inaccurate, particularly for large field-of-view imaging. It would be good to add references to corresponding off-axis wave propagation models, such as “Shifted angular spectrum method for off-axis numerical propagation” by Matsushima, “Modeling Off-Axis Diffraction with the Least-Sampling Angular Spectrum Method” by Wei et al., and “Shifted band-extended angular spectrum method for off-axis diffraction calculation” by Zhang et al.
4. Spatially varying PSFs have also been used in other applications with improved results. Therefore, I think it is important to add references to these works. For example, “Aberration-aware depth-from-focus” by Yang et al. demonstrated improvement in depth estimation when considering spatially varying PSFs. “Correcting Optical Aberration via Depth-Aware Point Spread Functions” by Luo et al. proposed spatially varying PSFs for optical aberration correction and depth estimation. Additionally, “High-Quality Computational Imaging through Simple Lenses” by Heide et al. proposed spatially varying PSF imaging for simple lenses.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The limitations of this paper are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Real-world experiments**
The DiffuserCam and PhlatCam datasets utilized in our study are collected from real-world lensless camera prototypes. These datasets are widely recognized and extensively used in lensless imaging literature, providing robust and reliable benchmarks for evaluation. Thus, while our study focuses on standard datasets, they effectively represent real-world scenarios and ensure a comprehensive evaluation of our proposed methods.
**2. Discussion from the optics field**
We appreciate the reviewer's suggestion to incorporate deeper insights from optics. We agree that a more thorough optical analysis can strengthen our model's foundation. In the revised manuscript, we will provide a detailed derivation of the imaging model mismatch from an optical perspective to enhance the theoretical basis of our approach.
**3. Mitigating vignetting in deconvolution outputs**
The appearance of vignetting is primarily due to the limitations of the shift-invariant convolution model, which motivates us to propose a spatially varying formulation. Our method effectively addresses vignetting through two stages. First, the proposed SVDeconv component in the first stage corrects for the inaccuracies of the shift-invariant model, mitigating vignetting artifacts. Second, the subsequent neural network stage learns a natural image distribution that is typically devoid of vignetting. This observation is supported by the fact that even a trained U-Net architecture, as employed in FlatNet, exhibits the ability to reduce vignetting.
**4. Revision on Fig. 4**
We appreciate the reviewer's suggestion. Figure 4(a) has been revised to accurately depict the light propagation as suggested. The updated figure is presented in *Fig. a of the rebuttal PDF*.
**5. Other comments**
Thanks for the suggestions on adding references. We will incorporate the recommended references to enhance the discussion on the theoretical foundations and applications of spatially varying PSFs in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks a lot for your rebuttal. | Summary: This paper proposes a method for reconstructing photorealistic images in lensless imaging. The reconstruction process, which aims to be consistent with observations while achieving photorealism, is based on range-null space decomposition. To accommodate realistic cameras, the method introduces SVDeconv, which learns the deconvolution process with a spatially-varying PSF simultaneously. Additionally, the reconstruction in the null space is performed using a pretrained diffusion model. Quantitative and qualitative evaluations are conducted using two datasets, PhlatCam and DiffuserCam.
Strengths: + For the generally challenging task of reconstructing high-frequency details in lensless imaging, introducing the concept of range-null space decomposition to achieve photorealistic and measurement-consistent image reconstruction is a very rational and technically sound approach.
+ The idea of using a generative approach solely for reconstruction in the null space, rather than relying entirely on generative priors for restoration, addresses the issue of hallucination. This approach showcases originality.
+ The effectiveness is also commendable as it achieves generally good results both quantitatively and qualitatively when compared with various other methods.
Weaknesses: 1. There is a lack of detail regarding the training process, making it difficult to understand correctly. It is unclear whether fine-tuning is performed using the input images at test time, or whether the PSF and parameters are frozen in a pretrained network.
1. The method of dividing the Spatially-Varying PSF (SV-PSF) into a 3×3 grid is somewhat naive. Particularly, I doubt that the spatial dependency of the PSF is also influenced by the target scene depth, which does not appear to be considered.
1. The analysis of the results is also insufficient. It is unclear how close the estimated PSF is to the accurately calibrated SV-PSF. The comparison between the reconstructed images using the accurate SV-PSF and the proposed method is not discussed. Additionally, for the dataset used in the evaluation, the inference results of the range-null content are not provided (as shown in Figure A2). The data fidelity in the reconstructed images is also not confirmed.
+ Comment: In eq(4), $N\times N$ is used, while previously $K\times K$ was used.
Technical Quality: 3
Clarity: 3
Questions for Authors: If there are any misunderstandings in the weaknesses pointed out, please clarify them.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been addressed well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Details of the training process**
In the first stage, SVDeconv is trained using range-space content derived from ground truth images. SVDeconv consists of two main parameter components: a learnable deconvolution kernel initialized with known PSFs, and a U-Net initialized with standard weights without pretraining. Once trained, SVDeconv processes input lensless measurements to estimate the range-space content of training samples. Subsequently, we use this estimated range-space content as input conditions for fine-tuning via null-space diffusion. During diffusion fine-tuning, we utilize a pre-trained diffusion model with frozen weights. We only train the supplementary conditioning modules like StableSR [35], to guide the reconstruction process effectively.
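The range/null split driving the two stages can be illustrated on a toy linear system (a generic numpy sketch; the random wide matrix $A$ stands in for the lensless forward operator and is purely illustrative, not our actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))   # toy wide forward operator (m < n)
x = rng.standard_normal(64)         # ground-truth "scene"
y = A @ x                           # noiseless measurement

A_pinv = np.linalg.pinv(A)
x_range = A_pinv @ (A @ x)          # range-space content: determined by y
x_null = x - x_range                # null-space content: invisible to A

# The null-space part contributes nothing to the measurement, so any
# generative completion of it keeps the reconstruction consistent with y:
x_hat = x_range + 0.5 * x_null      # stand-in for a diffusion-sampled completion
```

This is why the diffusion stage can hallucinate-free "fill in" only the null space while the measurement consistency established in stage one is preserved.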
**2. Variation of spatially-varying PSF related to the depth**
In our lensless camera setup, the scene-to-camera distance (typically around 40cm, which is practical for everyday capture scenarios) significantly exceeds the sensor size (less than 1cm). At such distances (from 30cm to infinity), the spatial variance of the PSF along the depth axis is negligible, and the PSF is effectively that of an infinitely distant point source. Our simulations validate this by showing a 0.995 similarity score between the PSFs of a point light source at 30cm and at 100cm. Such scenarios are widely accepted in the relevant literature [15,17,23,43] and align with the lensless imaging datasets used in our study.
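The geometric intuition can be checked with a crude 1D sketch (illustrative only, not our wave-optics simulation: the mask shadow from an on-axis point source at depth $z$ is magnified by $(z+d)/z$, which is nearly constant once $z$ far exceeds the mask-sensor distance $d$; all constants below are assumptions):

```python
import numpy as np

def psf_1d(z_mm, d_mm=2.0, n_mask=32, n_sensor=1024, seed=0):
    """1D shadow of a smooth random mask cast by an on-axis point source at
    depth z: the pattern reaching the sensor is magnified by m = (z+d)/z."""
    rng = np.random.default_rng(seed)
    coarse = rng.random(n_mask)                        # smooth mask features
    mask = np.interp(np.linspace(0, n_mask - 1, n_sensor),
                     np.arange(n_mask), coarse)
    m = (z_mm + d_mm) / z_mm                           # geometric magnification
    pos = np.arange(n_sensor) / m                      # sensor -> mask coordinates
    return np.interp(pos, np.arange(n_sensor), mask)

def pearson(p, q):
    return float(np.corrcoef(p, q)[0, 1])

sim_far = pearson(psf_1d(300.0), psf_1d(1000.0))       # 30 cm vs 100 cm
sim_near = pearson(psf_1d(300.0), psf_1d(10.0))        # 30 cm vs 1 cm
```

The far-field pair stays highly correlated while the very-near pair decorrelates, matching the regime boundary discussed above.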
Consideration of depth-dependent spatial variance in the PSF becomes critical only when the scene-to-camera distance is comparable to the camera size, typically between 1cm and 5cm. However, these scenarios require capturing scenes very close to our camera, which is beyond the scope of this paper. Future work will explore these 3D spatially varying PSF effects.
**3. Comparison of reconstruction using accurate SV-PSF and our method**
We conducted an additional experiment below, which shows that our method achieves comparable performance to SV-deconvolution methods when accurate PSFs are provided.
In real-world lensless camera datasets like PhlatCam and DiffuserCam, accurately calibrated Spatially-Varying Point Spread Functions (SV-PSFs) for different incident angles are typically unavailable. To validate the effectiveness of our proposed method, we simulated a dataset using the simulated lensless camera discussed in our paper, incorporating known SV-PSFs. This dataset comprises 2000 images with 20dB noise. We evaluate five methods for SV-deconvolution: 1) spatially varying FISTA [a], 2) MultiWienerNet [b] using a single known PSF, 3) MultiWienerNet [b] using a 3x3 grid of known PSFs, 4) our method using a single known PSF, and 5) our method using a 3x3 grid of known PSFs. Results in the following table demonstrate that our method achieves comparable performance to SV-deconv methods utilizing accurately calibrated SV-PSFs.
| Methods | PSNR | SSIM | LPIPS |
|---|---|---|---|
| Spatially-varying FISTA [a] | 24.19 | 0.787 | 0.288 |
| MultiWienerNet [b] (1 known PSF) | 24.72 | 0.796 | 0.273 |
| MultiWienerNet [b] (3x3 known PSFs) | 25.88 | 0.832 | 0.261 |
| Ours (1 known PSF) | 25.47 | 0.811 | 0.265 |
| **Ours (3x3 known PSFs)** | **26.02** | **0.837** | **0.258** |
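As a minimal reference point for what grid-based spatially-varying deconvolution does, the sketch below uses classical Wiener filtering with hard per-region assignment (an illustrative stand-in for the learned kernels and smooth blending in MultiWienerNet-style methods; all names and constants are ours, not the compared implementations):

```python
import numpy as np

def wiener_deconv(meas, psf, nsr=1e-4):
    """Frequency-domain Wiener deconvolution under a circular-convolution model."""
    H = np.fft.fft2(psf, s=meas.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(meas) * G))

def sv_deconv(meas, psf_grid, nsr=1e-4):
    """Deconvolve with a 3x3 grid of local PSFs: each image region keeps the
    estimate produced by its own PSF (a smooth blending window would be used
    in practice instead of this hard assignment)."""
    h, w = meas.shape
    out = np.zeros((h, w))
    ys = np.linspace(0, h, 4, dtype=int)
    xs = np.linspace(0, w, 4, dtype=int)
    for i in range(3):
        for j in range(3):
            est = wiener_deconv(meas, psf_grid[i][j], nsr)
            out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = est[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
    return out

# Sanity check: a well-conditioned PSF, identical in all 9 cells, reduces to
# the spatially-invariant case and is inverted almost exactly.
rng = np.random.default_rng(0)
img = rng.random((48, 48))
psf = np.zeros((48, 48))
psf[0, 0], psf[0, 3] = 0.7, 0.3     # two-tap PSF with |H| bounded away from 0
meas = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
rec = sv_deconv(meas, [[psf] * 3 for _ in range(3)])
```

With genuinely different PSFs per cell, each region is restored by its local kernel, which is the effect the 3x3 grid comparison above measures.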
**4. Data fidelity in the reconstructed range-space contents**
For data fidelity in the reconstructed range-space contents, the results in Tab.2 of the original paper compare various deconvolution methods, showing that our approach achieves superior fidelity quantitatively. Reviewers can also refer to the qualitative examples in *Fig. b in the rebuttal PDF* for further confirmation.
**5. Other comments**
Thank you for pointing out the $𝑁$×$𝑁$ typo, we will revise it.
[a]. Yanny, Kyrollos, et al. "Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy." Light: Science & Applications 9.1 (2020): 171.
[b]. Yanny, Kyrollos, et al. "Deep learning for fast spatially varying deconvolution." Optica 9.1 (2022): 96-99.
---
Rebuttal Comment 1.1:
Title: Reponse to the rebuttal
Comment: ### 1. Details of the training process
Thanks to the authors for the clarification.
### 2. Variation of spatially-varying PSF related to the depth
I expect the authors to address in the final version that the spatial dependency of the PSF is negligible in the target application and is out of scope for the paper.
### 3. Comparison of reconstruction using accurate SV-PSF and our method
The additional results are not convincing for me.
Can the authors clarify why the proposed method outperforms the SV-deconv methods utilizing accurately calibrated SV-PSFs?
It might be inadequate to simply compare the PSNR of the final results. I believe that additional analyses, such as checking and comparing the range- and null-space contents, would be more informative.
### 4. Data fidelity in the reconstructed range-space contents
The description for Tab.2 in the original paper is insufficient.
I appreciate the authors for providing the range space content in the rebuttal. It helps to clearly see the improvement achieved by the method.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response, and we greatly appreciate your insightful comments and advice regarding our work.
## 1. Comparison of reconstruction using accurate SV-PSF and our method
We believe the proposed deep-learning-based approach outperforms traditional SV-deconvolution methods like spatially-varying FISTA on lensless imaging due to key differences in the image priors these methods incorporate. It is known that some high-frequency information (the null-space content in our paper) is lost in the lensless imaging process. Therefore, even with accurate SV-PSFs, a traditional iterative optimization-based method for inverse imaging can only recover the range space of the original capture, often resulting in over-smoothed outputs. In contrast, a well-trained neural network can learn image priors to recover the original scene with both range-space content and null-space content, therefore achieving better results. Similar observations have also been reported in the MultiWinnerNet [b] paper.
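As a concrete (though simplified) illustration of what range- and null-space content mean here: for a linear measurement operator $A$, any signal $x$ splits into $A^{+}Ax$ (the component recoverable from the measurement) and $(I - A^{+}A)x$ (the component the measurement cannot see). The snippet below is our own toy sketch with a random matrix, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "measurement operator": a fat matrix, so it has a non-trivial null space.
A = rng.standard_normal((3, 5))
x = rng.standard_normal(5)   # original signal

A_pinv = np.linalg.pinv(A)
x_range = A_pinv @ A @ x     # range-space content: recoverable from y = A x
x_null = x - x_range         # null-space content: invisible to the measurement

assert np.allclose(x_range + x_null, x)   # the decomposition is exact
assert np.allclose(A @ x_null, 0)         # the measurement annihilates the null part
```

An optimization-based solver constrained only by the measurement can at best recover `x_range`; `x_null` must come from a learned prior, which is the role of the diffusion model in the second stage.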
To further show the effectiveness of the proposed method in recovering the range-space content, we compare the range-space content of the following methods: 1) spatially-varying FISTA [a], 2) MultiWinnerNet [b] using a single known PSF, 3) MultiWinnerNet [b] using a 3x3 grid of known PSFs, 4) our method using a single known PSF, and 5) our method using a 3x3 grid of known PSFs. The following Tab. A shows that our method achieves comparable performance to iterative optimization-based SV-deconv methods utilizing accurately calibrated SV-PSFs.
**Table A: Comparison of different methods on range space content reconstruction**
| Methods | PSNR | SSIM | LPIPS |
| --- | --- | --- | --- |
| Spatially-varying FISTA [a] | **30.60** | 0.958 | 0.069 |
| MultiWinnerNet [b] (1 known PSF) | 28.72 | 0.931 | 0.074 |
| MultiWinnerNet [b] (3x3 known PSFs) | 29.84 | 0.965 | 0.052 |
| Ours (1 known PSF) | 29.47 | 0.952 | 0.061 |
| **Ours (3x3 known PSFs)** | 29.98 | **0.974** | **0.048** |
We further substantiate our observation by comparing the null-space content recovery capabilities of the above methods, highlighting the contrast between iterative optimization-based approaches and deep-learning-based methods. As illustrated in the table below, optimization-based methods struggle to recover null-space content compared to deep-learning-based methods, even when accurate SV-PSFs are used. Furthermore, the optimization-based method (15 seconds per image) is much slower than the deep learning approach (0.025 seconds per image), and obtaining precise calibration of SV-PSFs for lensless cameras in real-world conditions is very challenging due to uncontrolled environmental light noise.
**Table B: Comparison of different methods on null space content recovery**
| Methods | PSNR | SSIM | LPIPS |
| --- | --- | --- | --- |
| Spatially-varying FISTA [a] | 16.92 | 0.392 | 0.553 |
| MultiWinnerNet [b] (1 known PSF) | 22.48 | 0.579 | 0.270 |
| MultiWinnerNet [b] (3x3 known PSFs) | 23.39 | 0.608 | 0.249 |
| Ours (1 known PSF) | 22.94 | 0.594 | 0.265 |
| **Ours (3x3 known PSFs)** | **23.68** | **0.611** | **0.243** |
Meanwhile, we greatly appreciate the reviewers for reminding us to compare the range space and null space content for additional insights. This comparison enhances our understanding of the differences between iterative optimization-based methods and deep learning-based methods, while also highlighting our contribution of introducing analysis of lensless imaging through range-null space decomposition.
## 2. Variation of spatially-varying PSF related to the depth
Thanks for your advice; we will try to extend our work to 3D SVDeconv scenarios by introducing the 3D coordinates of the focus centers.
## 3. Data fidelity in the reconstructed range-space contents
We will improve the clarity of the description for Tab. 2 in the original paper. | Summary: This paper proposed a deep learning-based approach for lensless imaging. To address the problem of model mismatch, i.e., that simple convolutional models cannot accurately describe the lensless imaging process, this paper introduced a spatially-varying deconvolution module that reweights the deconvolution results from multiple kernels using spatially-varying weights. In addition, a two-stage model based on range-null space decomposition is proposed. In the first stage, a spatially-varying deconvolution network is designed to reconstruct the low-frequency content; the second stage uses the output of the previous stage as a condition to guide the pre-trained diffusion model in reconstructing fine high-frequency image details associated with the null space of the measurement operator. The experiments were conducted on two datasets: PhlatCam and DiffuserCam.
Strengths: 1. Introducing a diffusion model to reconstruct fine details of high-frequency content associated with the null space of the measurement operator.
2. To some extent, the spatially-varying deconvolution solved the problem of model mismatch.
Weaknesses: 1. Using a diffusion model to recover null-space-related image information is not new, e.g.,
[a] Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model. ICLR 2023.
2. Lack of experimental or deductive analysis processes to demonstrate the effectiveness or accuracy of using range-null space decomposition to describe the process of lensless image restoration.
3. The PSFs in lensless imaging usually have a very large size (even larger than the image). Accordingly, the deconvolution kernels should be large. However, the deconvolution kernel size seems small in the first stage.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Why simulate the PSF that has already been calibrated in the dataset, and what effect does the simulated PSF have in the algorithm?
2. Section 3 mentions that model mismatch is mainly caused by the presence of the incident angle θ. However, in the first stage of weight calculation, the error is computed by considering only the distance. Why?
3. What exactly does spatially-varying represent? How is the FoV center of the learnable kernel determined in weight calculation? What does the distance from point (u, v) to the FoV center exactly mean? Illustration with one figure could facilitate the understanding.
4. How to set the size and quantity of the learnable kernels?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The paper mentioned several limitations, such as computational cost due to introducing diffusion-based sampling and two-stage processing, as well as false details introduced by the diffusion model.
One possible limitation not mentioned in the paper is that the two-stage framework may have a robustness issue: compared to an iterative framework, errors from the first stage may affect the accuracy of the second stage. I suggest the authors discuss this issue.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Comparison with DDNM**
Our paper presents notable advancements over DDNM in both reconstruction quality and inference speed. Training-free methods like DDNM depend on an accurate imaging model to recover the null space, but acquiring such a model for lensless imaging is difficult. In contrast, our null-space diffusion model also utilizes real-world training data pairs, improving its performance on real captures where the imaging formation model is imperfect.
As demonstrated in Tab. 3 and Fig. 9 of the original paper, our approach consistently outperforms DDNM on real captures. Additionally, DDNM requires 3x more time than ours for reconstruction under the lensless imaging setting. Unlike DDNM, which requires complex calculations at each sampling step to maintain consistency with the original measurement, our feed-forward model minimizes additional computational overhead while delivering superior results. Moreover, we introduce a novel imaging model that enhances the accuracy of estimating range-space content, crucially improving the fidelity of the final reconstruction.
**2. Effectiveness of range-null space decomposition**
In Tab. 4 and Fig. 10 of the original paper, we conduct a comparative analysis of our diffusion model under various conditions. Specifically, for the SVD-OC method, we utilize a similar deconvolution approach to recover the original content, contrasting with our method that incorporates range-null space decomposition. The results consistently demonstrate our method's superiority over SVD-OC, highlighting the clear advantage of integrating range-null space decomposition in enhancing lensless image restoration.
**3. Clarification about deconvolution kernel size**
The deconvolution kernel size is as large as the PSF size of the lensless camera: for example, 1280 × 1480 in the PhlatCam dataset (the same as the lensless measurement) and 540 × 960 in the DiffuserCam dataset (larger than the original lensless measurement). For DiffuserCam, we employ replicate padding to align the PSF size with the padded measurements, as detailed in our paper. This ensures our method effectively handles the large PSF sizes typical in lensless imaging, maintaining accuracy in image restoration.
**4. The function of the simulated lensless camera and PSF**
The simulation of the lensless PSF is solely used to demonstrate the mismatch in widely used spatially invariant convolution models. We do not use the simulation in our lensless imaging experiments. For the real-world lensless datasets we used, such as DiffuserCam and PhlatCam, we only have a calibrated PSF at the center (zero angle of incidence). Therefore, we use this calibrated PSF to initialize all the learnable deconvolution kernels.
**5. Relation between the incident angle and the FoV center**
In a lensless setup with a 2-dimensional imaging plane, the Huygens-Fresnel principle [9] establishes a direct relationship between the incident angles ($\theta$, $\phi$) and the center shift of the PSF — known as the focus center. Specifically, if the PSF center for zero angles of incidence light is located at ($c_x$, $c_y$), then for incident angles ($\theta$, $\phi$), the PSF center approximately shifts to ($c_x - d \sin \theta$, $c_y - d \sin \phi$) according to the Fresnel propagation approximation [9], where $d$ denotes the mask-sensor distance. For detailed derivation, please refer to the Phlatcam paper [5], which we will further elaborate upon in the revised version. Reviewers can also refer to *Fig. a in the rebuttal PDF* for an illustration. This approach allows us to model the PSF variation across different focus centers on the imaging plane, reflecting the variation due to incident angles.
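As a quick numeric sanity check of the shift rule above, the snippet below plugs made-up values (a hypothetical mask-sensor distance and PSF center, not calibrated quantities) into the Fresnel-approximation formula:

```python
import numpy as np

# Hypothetical numbers (not calibrated values): mask-sensor distance and
# the PSF focus center for normal incidence, both in pixel units.
d = 2000.0
cx, cy = 640.0, 740.0

theta, phi = np.deg2rad(5.0), np.deg2rad(-3.0)  # incident angles

# Fresnel-approximation shift of the focus center for angles (theta, phi):
# (c_x - d sin(theta), c_y - d sin(phi)).
fx = cx - d * np.sin(theta)
fy = cy - d * np.sin(phi)
```

Each learnable kernel is thus associated with one such focus center `(fx, fy)`, and the spatially-varying weight at pixel `(u, v)` depends on the distance from `(u, v)` to that center.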
In our formulation, the center of the learnable kernel corresponds to a specific incident angle as well as coordinates on the imaging plane. We define this specific incident angle as the FoV center of the learnable kernel and the coordinates as the focus center of the kernel. Therefore, the weights can be determined by the distance between the pixel coordinates (u, v) and the focus center.
**6. Robustness of the two-stage Framework**
We appreciate the concern about the potential robustness issues of a two-stage framework. In fact, our training scheme is designed to mitigate errors introduced by inaccurate image formation in the first stage. Our range-space reconstruction in the first stage is a deterministic process, focused on recovering information directly observable from the lensless measurement. This reduces the risk of introducing artifacts or errors that would hinder the subsequent diffusion process. Additionally, our SVDeconv component is specifically designed to handle the challenges of lensless image reconstruction, improving the accuracy of the first stage. We conducted ablation studies (Tab. 4 and Fig. 10) comparing different intermediate outputs, demonstrating that using the full estimated image reconstruction as input to the diffusion model can indeed amplify first-stage errors, whereas our range-space-based approach alleviates this issue. This is because recovering range-space content is easier than recovering the original content in the first stage, so we achieve better data fidelity and fewer artifacts. *Fig. b in the rebuttal PDF* also shows the data fidelity of our first stage.
Moreover, iterative frameworks that project the output onto the lensless measurement space generally rely on precise imaging models, which are difficult to establish accurately in the context of lensless imaging; inaccurate modeling may introduce additional errors in the reconstruction process. Furthermore, iterative methods such as DDNM and similar approaches often require more computational resources.
Given these considerations, our method achieves a balanced trade-off between performance and efficiency.
**7. Choice of the number of deconvolution kernels**
Please refer to the reply to Reviewer FAra (R1).
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks for the rebuttal which has addressed most of my comments. I would like to raise the score to Weak Accept. One limitation is the time complexity shown in the response to Reviewer FAra, mainly caused by the use of diffusion models and much higher than CNN-based methods. I understand this is the common limitation in existing diffusion-based methods, and hope that it could be addressed in future work.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Ernw:
Thank you for acknowledging our explanation and raising the initial rating. We would be happy to respond if you have any further concerns.
Strengths: 1) The biggest strength of this paper lies in the quality of results displayed. The reconstructed images closely resemble the ground truth and are structurally very similar, unlike some of the other comparable methods.
2. Extensive evaluation has been performed. The method has been tested on two different lensless imaging datasets, which helps build confidence. The method outperforms peers on 6 different metrics, 3 for quality and 3 for photorealism. The evaluation has been conducted on both the range-space recovery method and the null-space recovery method, demonstrating the merit of this approach.
3) Great clarity has been provided around the range space - null space decomposition, including mathematical derivations.
4) The method seems reproducible because of the great details provided in the paper, for both range space and null space reconstruction.
Weaknesses: 1) The paper mentions usage of 3X3 PSF kernels for deconvolution. They arrive at this conclusion using experiments with different sizes. However, no mathematical reasoning or intuition has been provided as to why this is a good choice, and under what circumstances it will break down. Without this explanation, it is very difficult to reuse the same model for a dataset captured with a different lensless system. More work/reasoning is needed to support this choice of sampling.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1) This is minor, but why is Wiener Deconvolution referred to as WinnerDeconv everywhere in the paper?
2) Although the paper compares the proposed approach with several other methods for qualitative evaluation, it is unclear how these approaches compare when it comes to computational complexity. Typically, there is a trade-off between quality and compute. Therefore I would like to see a comparison of the computational complexity involved in reconstruction.
3) I would recommend citing the following paper:
V. Boominathan, J. T. Robinson, L. Waller, and A. Veeraraghavan, “Recent advances in lensless imaging,” Optica 9(1), 1 (2022). This paper gives an overview of the common modulation schemes used for lensless imaging, and I believe your approach is only applicable for phase modulation masks and not for amplitude modulation masks?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: 1) Although the authors mention that real-time reconstruction is not possible, it is unclear how much time reconstruction actually takes. It would be great if this is explained as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Choice of the number of deconvolution kernels**
The rationale for using 3x3 PSF kernels is based on the assumption that the central PSF effectively represents the region of the field of view (FoV) where PSFs change smoothly, which is typical in lensless settings. The 3x3 choice balances performance and cost: by discretely sampling a small number of focus centers in the central region and their corresponding PSFs, we approximate the continuous PSF across the entire FoV while minimizing computational and memory costs.
Furthermore, opting for an odd number of PSFs (in both width and height dimensions) is recommended when employing a single calibrated PSF at the center of the FoV ($\theta$ = 0). This choice ensures that the initialized deconvolution kernel at the FoV center aligns accurately with the calibrated PSF, establishing a reliable starting point for the deconvolution process. In contrast, using an even number of kernels may result in inaccurate initializations across all deconvolution kernels, potentially compromising the effectiveness of the deconvolution. With an odd number, at least one correct initialization can be assured.
**2. Comparison of the computational complexity**
| Method | FlatNet [15] | Le-ADMM-U [23] | DDNM [37] | SVDeconv (first stage)| Null-space diffusion (second stage) | PhoColens (two stages total) |
|-|-|-|-|-|-|-|
| **Inference time (in sec)** | 0.013 | 0.047 | 3.447 | 0.025 | 0.781 | 0.806|
We evaluated the computational efficiency of the proposed method on a machine equipped with an Intel(R) Xeon(R) Platinum 8352V CPU @ 2.10GHz and an RTX 4090 GPU. The table above presents the inference time results. Future work will explore optimal trade-offs between computational speed and performance for various application scenarios.
**3. Usage of WinnerDeconv**
Thanks for the reminder! We agree that we should use the correct term 'Wiener Deconvolution' in the main text to ensure clarity and accuracy, and we apologize for any confusion caused.
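For completeness, the generic FFT-domain form of Wiener deconvolution (a textbook sketch, not the paper's implementation; the `nsr` regularization constant is our illustrative parameter) looks like:

```python
import numpy as np

def wiener_deconv(y, psf, nsr=1e-2):
    """Generic FFT-domain Wiener deconvolution (textbook form).

    y   : 2D measurement; psf : same-shape PSF with its peak at [0, 0];
    nsr : noise-to-signal ratio acting as regularization.
    """
    Y = np.fft.fft2(y)
    H = np.fft.fft2(psf)
    # Wiener filter: conj(H) / (|H|^2 + nsr); nsr > 0 keeps weak
    # frequencies of H from blowing up the division.
    X = np.conj(H) * Y / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

# Sanity check: with an ideal delta PSF and no regularization, the
# deconvolution returns the measurement unchanged.
img = np.random.default_rng(0).standard_normal((8, 8))
delta = np.zeros((8, 8)); delta[0, 0] = 1.0
assert np.allclose(wiener_deconv(img, delta, nsr=0.0), img)
```

This circular-convolution form assumes the PSF and measurement share the same grid; in practice, padding (as discussed for DiffuserCam above) is needed to control boundary effects.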
**4. Phase modulation masks v.s. amplitude modulation masks**
Our model is applicable to all lensless cameras using a convolution imaging model, whether they employ phase modulation masks or amplitude modulation masks. However, for amplitude modulation lensless cameras like Flatcam [3], which utilize a separate imaging model, further exploration is needed to assess the effectiveness of the proposed method.
**5. Other comments**
In the revised version, we will include a citation to the paper 'Recent Advances in Lensless Imaging,' which contributes to our research by offering a comprehensive review of the definition and evolution of lensless imaging. | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort dedicated by the ACs and reviewers in evaluating our work. We have carefully considered all comments and suggestions, and our detailed responses can be found in the rebuttal box below.
Pdf: /pdf/f562ca218a7df507c715c8de4e3a51f6946ac42c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Continuous Product Graph Neural Networks | Accept (poster) | Summary: The authors propose a new spectral GNN for Cartesian product graphs. The graph filter is chosen as Laplacian heat diffusion, which is separable across factor graphs. The stability and over-smoothing of the proposed model are studied. The authors perform synthetic experiments to validate the analysis. Experiments on real datasets also show the advantages of the proposed methods.
Strengths: The authors provide a rather thorough analysis of the proposed methods. Theoretical results are provided. Experiments are detailed. The paper is well-written and easy to follow.
Weaknesses: My main concern is with the over-smoothing analysis. The normalized product Laplacian as defined in Lemma 3.8 is generally not the normalized Laplacian of the product graph. It is more of a construction designed so that the smoothness of the product graph decomposes into factor-wise smoothness terms. This also means that Eq. 15 is not true if the LHS is defined as the 'actual' smoothness on the normalized product graph as in Eq. 13. I think the authors need to revise the analysis here.
Another concern is that the advantages of a separable graph filter are not made explicit. This is somewhat discussed in Remark 3.5., where the authors argue that factor-wise EVD is more efficient than EVD of the product graph. However, note that this is merely a consequence of the KP eigenvector structure of the Cartesian graph product [1], not the separability of the heat diffusion filter. You don't need a full EVD of the product Laplacian even if the filter is not separable. How a separable filter further benefits the computation is unclear to me.
Minor points: figure 2 is not very intuitive; figure 3 is also not very convincing.
[1] Stanley, Jay S., Eric C. Chi, and Gal Mishne. "Multiway graph signal processing on tensors: Integrative analysis of irregular geometries." IEEE signal processing magazine 37.6 (2020): 160-173.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Spectral GNNs that use polynomials of graph adjacency matrices only require local aggregation and do not need to compute global EVD. How does the complexity of the proposed method compare to that? Are there good reasons for choosing Laplacian-based filters over adjacency-based filters in spectral GNNs?
2. Can the authors provide more details on the competing methods for real graph data? What about the methods that also explicitly consider the product structure? The real datasets all have a temporal dimension. How do traditional SP methods such as [2] perform?
[2] Grassi, Francesco, et al. "A time-vertex signal processing framework: Scalable processing and meaningful representations for time-series on graphs." IEEE Transactions on Signal Processing 66.3 (2017): 817-829.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Future directions to improve the theoretical results are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We’ve expanded some points of our rebuttal in additional comments. We kindly invite the reviewer to read them if required.**
**W1**. We should have mentioned that, first, we need to define the Dirichlet energy for the tensorial data, $D_T(.)$, as the summation of the factor-wise Dirichlet energies on the matricized (unfolded) forms of the tensorial data, $D_T(x):=\frac{1}{P}\sum_{p=1}^{P}{tr(\underline{X}\_{(p)}^\top\hat{L}\_p\underline{X}\_{(p)})}$, where $x=vec(\underline{X})$ and $\underline{X}_{(p)}$ is the $p$-th mode matricization of $\underline{X}$. The main intuition for this definition of $D_T(.)$ is that multi-domain data might be smooth on one of the factor graphs and non-smooth on the others. Therefore, we cannot consider such cases as “pure” smooth signals on the product graph. This intuition aligns with relevant works in this direction [1,9], where previous studies defined the Dirichlet energy on the combinatorial Laplacian of the product graph as the summation of the factor-wise Dirichlet energies. Note that the over-smoothing analysis in our paper holds in this case too. One can easily show that $D_T(x)$ is equivalent to writing the total Dirichlet energy using the product Laplacian as defined in Lemma 3.8, i.e., $D_T(x)=x^\top \hat{L} x$, where $x=vec(\underline{X})$.
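This equivalence can be checked numerically. The sketch below is our own illustration (assuming $\hat{L}$ is the $\frac{1}{P}$-scaled Kronecker sum of the factor normalized Laplacians and a column-major vec), verifying $D_T(x)=x^\top\hat{L}x$ for a two-factor example with path graphs:

```python
import numpy as np

def norm_path_laplacian(n):
    # Symmetric normalized Laplacian of an n-node path graph.
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt

L1, L2 = norm_path_laplacian(4), norm_path_laplacian(3)   # factor Laplacians
X = np.random.default_rng(0).standard_normal((4, 3))      # 2-way "tensor" signal

# Factor-wise Dirichlet energies over the mode unfoldings (P = 2):
# mode-1 unfolding is X itself, mode-2 unfolding is X^T.
D_T = 0.5 * (np.trace(X.T @ L1 @ X) + np.trace(X @ L2 @ X.T))

# Product Laplacian as the (1/P)-scaled Kronecker sum (column-major vec).
L_hat = 0.5 * (np.kron(np.eye(3), L1) + np.kron(L2, np.eye(4)))
x = X.flatten(order="F")
assert np.isclose(x @ L_hat @ x, D_T)   # D_T(x) = x^T L_hat x
```

The same check also shows `L_hat` is positive semidefinite, consistent with property i) discussed next.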
Once we have this definition for the Dirichlet energy, our definition of the Laplacian matrix in Eqn. (14) emerges naturally. This matrix has two favorable properties: i) it is positive semidefinite, and ii) its spectrum lies in $[0,2]$ (as in regular normalized Laplacians), giving a bounded spectrum that avoids numerical instability. We agree with the reviewer that this Laplacian might not generally be a valid normalized Laplacian for the product graph and deserves further exploration in future work. We've corrected some sentences and added a remark regarding this discussion in the Appendix of the revised manuscript.
**W2**. Let’s consider $h(L)=e^{-L\_{1}}\otimes \sum_{i=1}^{K}{L^{i}\_2}$ as a separable graph polynomial filter. Since $h(L)=(V_1\otimes V_2)(e^{-\Lambda_1}\otimes\sum_{i=1}^{K}{\Lambda^i_2})(V_1\otimes V_2)^\top$, the KP structure of the eigenvectors holds, but we are not facing a valid well-defined graph product. However, instead of computing the direct EVD of $h$ with complexity $(N_1N_2)^3$, we can first compute the factor graphs' EVDs with complexity $N_1^3+N_2^3$, and then $V_1\otimes V_2$ and $e^{-\Lambda_1}\otimes\sum_{i=1}^{K}{\Lambda^i_2}$. Although we agree about the relationship between a heat diffusion filter defined on a Cartesian Laplacian and the KP of separable heat filters, please note that the computational benefits come from the possibility of selecting eigenvalue-eigenvector (eig-eiv) pairs. For example, consider a heat diffusion defined on a Kronecker product graph adjacency. In this case, we have $h(A)=e^{-A_1\otimes A_2}=(V_1\otimes V_2)(e^{-\Lambda_1\otimes\Lambda_2})(V_1\otimes V_2)^\top$, and since we cannot write it as $h_1(A_1)\otimes h_2(A_2)$, we cannot use a selective strategy. In contrast, for Cartesian products, we can write $e^{-L_1}\otimes e^{-L_2}$ and also $e^{-L_1}=\sum_{i=1}^{N_1}{e^{-\lambda^{(1)}_i} v^{(1)}_i {v^{(1)}_i}^\top}$. Therefore, selecting the most important pairs (as a low-rank approximation problem) has a theoretical justification, which is not generally true for Kronecker graphs.
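The separability of the heat filter on a Cartesian (Kronecker-sum) Laplacian, and the fact that factor EVDs suffice, can be checked with a small numerical sketch (our illustration with random weighted graphs): since $L_1\otimes I$ and $I\otimes L_2$ commute, $e^{-(L_1\oplus L_2)}=e^{-L_1}\otimes e^{-L_2}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_laplacian(n):
    # Combinatorial Laplacian of a random weighted graph.
    W = np.abs(rng.standard_normal((n, n)))
    W = (W + W.T) / 2.0
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def heat_kernel(L):
    # expm(-L) via eigendecomposition (L is symmetric).
    w, V = np.linalg.eigh(L)
    return (V * np.exp(-w)) @ V.T

L1, L2 = random_laplacian(4), random_laplacian(3)

# Cartesian product Laplacian = Kronecker sum L1 (+) L2.
L = np.kron(L1, np.eye(3)) + np.kron(np.eye(4), L2)

# Separability: the heat filter of the product graph factorizes, so the
# factor EVDs (cost N1^3 + N2^3) suffice instead of an EVD of size (N1*N2)^3.
assert np.allclose(heat_kernel(L), np.kron(heat_kernel(L1), heat_kernel(L2)))
```

Because each factor heat kernel is itself a spectral sum $\sum_i e^{-\lambda_i} v_i v_i^\top$, truncating to the most important eig-eiv pairs per factor gives the selective low-rank strategy discussed above.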
**W3**. These minor points are answered in the comments.
**Q1**.
- It has been shown that computing the forward propagation in a spectral GNN using the Chebyshev polynomials is $\mathcal{O}(K|\mathcal{E}|F\_{in}F\_{out})$, with $K$, $|\mathcal{E}|$, $F\_{in}$, and $F\_{out}$ being graph filter order, number of edges of the product graph, input and output features, respectively [4]. For our product graph model, the number of edges can be expressed as $|\mathcal{E}|=\sum_{p=1}^{P}{(\prod_{i=1\ne p}^{P}{N\_p})|\mathcal{E}\_p|}$ [5]. Therefore, the complexity of our forward propagation in Eqn. (7) is $\mathcal{O}(K\_P+K\_PF\_l+K\_PNF\_l+K\_PF\_lF\_{l+1}+NK\_PF\_{l+1})$, where $K_P=\prod_{p=1}^{P}{K_p}$. Assuming often $K_P\ll N$, our complexity approximately takes the form of $\mathcal{O}(K_PN(F_l+F_{l+1}))$. So, we can say that the complexity of our forward function is linear in terms of the number of nodes of the product graph, and can be reduced by selecting the most important eig-eiv pairs in factor graphs $K_P$. Note that, we also need one preprocessing step of the factor graph’s EVD with a complexity $\mathcal{O}(N^2_pK_p)$, but it's only needed once.
- The PDEs on graphs are defined using the Laplacian and not the adjacency matrix, so using the adjacency would lack this theoretical grounding.
**Q2**.
- The full details about the baselines are provided in Appendix B.
- The most general method that relies on product graphs, to the best of our knowledge, is GTCNN which is already compared in our paper. However, GTCNN is limited to only two-factor graphs.
- Please note that we have already compared with the traditional (graph) SP methods GP-VAR [6] and GP-VARMA [6]. Indeed, GP-VAR and GP-VARMA are built upon [7].
References:
-----
[1] “Low-rank and smooth tensor recovery on Cartesian product graphs” IEEE International Conference on Sampling Theory and Applications, 2023
[2] “Graph neural networks exponentially lose expressive power for node classification” ICLR, 2020
[3] “A note on over-smoothing for graph neural networks” 2020
[4] “Convolutional neural networks on graphs with fast localized spectral filtering” NeurIPS, 2016
[5] “Learning product graphs underlying smooth graph signals” 2020
[6] “Forecasting time series with Varma recursions on graphs” IEEE TSP, 2019
[7] “A time-vertex signal processing framework: Scalable processing and meaningful representations for time-series on graphs” IEEE TSP, 2017
[8] "The emerging field of signal processing on graphs" IEEE SPM 2013
[9] “Product graph learning from multi-domain data with sparsity and rank constraints.” IEEE TSP, 2021
---
Rebuttal 2:
Title: More details about the responses
Comment: **W1**.
- As a simple toy example, consider a $4\times3$ matrix $U=[1,2,3;1,2,3;4,5,6;4,5,6]$, and assume the factor graphs are 4- and 3-node simple path graphs. Then $U$ is smooth on the 4-node graph but not on the 3-node one, and thus cannot be considered a pure smooth signal on the product graph.
**W2**.
- For the sake of completeness, suppose we could write the resulting multi-way graph filter as $h(L)=h\_1(L_1)\otimes h\_2(L\_2)$, where $h\_{1(2)}(\cdot)$ is a polynomial graph filter (with possibly infinite order, like heat graph filters). Then, one can use the low-rank expansion $h\_1(L\_1)\approx\sum_{i=1}^{K\_1}{h\_1(\lambda\_i) v\_i v^\top\_i}$. Therefore, the computation can be improved by choosing an appropriately small $K_1$.
**W3**.
- Figure 2 illustrates the effect of factor stabilities on the overall stability, as Theorem 3.7 states. By varying the adjacency SNR for each factor (related to $\mathcal{O}(\epsilon_p)$), this figure validates the effect of the factor stabilities on the overall performance. We've included a 3D plot in the **uploaded PDF** of the main rebuttal (Fig. R5) with a more intuitive view of the same experiment, better illustrating the interpretation of Theorem 3.7.
- Regarding Figure 3, please first note that it is on a logarithmic scale, which we forgot to mention; this has been corrected in the revised version of the manuscript. The left plot is associated with the case $\ln{s}-\frac{2\tilde{t}\tilde{\lambda}}{P}<0$, which is prone to over-smoothing and converges to zero very fast (exponentially). That's why the difference between $\ln{(\frac{E(X_l)}{E(X_0)})}$ and $l(\ln{s}-\frac{2\tilde{t}\tilde{\lambda}}{P})$ is extremely small. For the right plot in Figure 3, we have $\ln{s}-\frac{2\tilde{t}\tilde{\lambda}}{P}>0$. Here, the theorem is again validated by the RHS bound in Eqn. (17). However, this bound is not tight, and we leave further analysis as future work. We kindly refer the reviewer to [2] to get more familiar with this kind of analysis and the plots regarding over-smoothing in discrete GNNs.
---
Rebuttal 3:
Title: Further Responses
Comment: I thank the authors for their detailed reply.
W1. Please make sure to revise the paragraph at line 173. Also, the authors should realize that [9] used the combinatorial Laplacian, and that is why the smoothness is naturally decomposable. The normalized Laplacian is a whole different story. The authors should also pay attention to Eq. 13 and Eq. 15, and make a clear distinction between them. Also make sure you don't accidentally use $E(U_t)$ in the form of Eq. 13 in the proof.
W2. I think one certainly can make the filter 'selective' even without separability. For example, consider a low-pass filter $f_i$ on some factor $i$ and a non-separable filter $g$ on the product graph. Their composition $f_i \circ g$ is not separable either, but selective on factor $i$. To be more specific, you can compute the EVD of factor graphs, transform the data to spectral domain, apply a non-separable filter, then select a subset of frequencies for each factor. I don't see how this is not as efficient as using a separable filter like the heat diffusion.
---
Rebuttal 4:
Comment: Thank you for engaging in the discussion and for your helpful and insightful comments.
**W1**. We will revise all that's required to align with the correct conclusions from our discussion.
We agree that for the *general* normalized Laplacian, the decomposability might not be true. However, please notice that for our definition in Eqn. (14) in our paper, the decomposability holds. For example, if we consider two (general, and not even graph-related) factor matrices, the following relationship is valid for any general matrices $A,B,C$ with consistent dimensions [1]:
- $AX+XB=C \leftrightarrow (A\oplus B^\top)vec(X)=vec(C).$
Now, by considering $x=vec(X)$ and using this relationship, one can write:
- $vec(X)^\top\left[(A\oplus B^\top)vec(X)\right]=vec(X)^\top\left[vec(AX+XB)\right]=vec(X)^\top vec(AX)+vec(X)^\top vec(XB)=tr(X^\top AX) + tr(X^\top XB)=tr(X^\top AX) + tr(XBX^\top).$
If $A=\hat{L}_1$ and $B=\hat{L}_2$, the proof is completed. Please note that our over-smoothing analysis is easily adaptable for general combinatorial Laplacian matrices as well.
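As a quick numerical sanity check of the identity above (an illustrative snippet of ours; we take vec to stack rows, matching numpy's default `.ravel()` order, under which the Kronecker sum reads $A\oplus B^\top=A\otimes I+I\otimes B^\top$):

```python
import numpy as np

# Numerical check of the vectorization identity
# (A ⊕ B^T) vec(X) = vec(AX + XB), with row-major vec (numpy .ravel()).

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
X = rng.standard_normal((n, m))

# Kronecker sum under row-major vec: A ⊕ B^T = A ⊗ I_m + I_n ⊗ B^T
K = np.kron(A, np.eye(m)) + np.kron(np.eye(n), B.T)

lhs = K @ X.ravel()                 # (A ⊕ B^T) vec(X)
rhs = (A @ X + X @ B).ravel()       # vec(AX + XB)
assert np.allclose(lhs, rhs)

# The quadratic form then recovers tr(X^T A X) + tr(X B X^T)
quad = X.ravel() @ K @ X.ravel()
assert np.isclose(quad, np.trace(X.T @ A @ X) + np.trace(X @ B @ X.T))
```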
**W2**. We agree that even with non-separable filters, one can select graph frequencies from the factor graphs. However, we don't find a strong theoretical justification for this. There might exist a logic in specific cases, like in [2], but this requires a general study of the product graph's spectrum. For example, consider non-separable filters such as $h(\lambda_1,\lambda_2)=e^{\lambda_1\lambda^2_2-\lambda^2_1\lambda_2}$, or even $h(\lambda_1,\lambda_2)=\cos{(\lambda_1\lambda_2)}$. In these cases, the importance of $\lambda_1$ in the spectrum of the product graph is tied to the importance of $\lambda_2$. Therefore, the most important eigenvalue in the first factor graph for performing low-pass filtering is not necessarily the one with the highest importance in the product graph spectrum, because the importance of the factor graphs' spectra is obscured by the non-separable function. Please notice that the theoretical justification is clear in our formulation since, based on our response in the previous rebuttal stage, the selection procedure in our paper comes from factor-wise low-rank approximation subproblems.
As a final remark, please note that this whole discussion about the separability boils down to just one remark in our paper, and not to one of our main contributions. Therefore, we can easily make the relevant modifications without an important repercussion in our main takeaways.
References
----
[1] "The matrix cookbook", 2008
[2] "Learning Cartesian Product Graphs with Laplacian Constraints", AISTATS, 2024. | Summary: This paper proposes tensor PDEs for graph neural networks for temporal graph prediction. The construction seems to be sound, and theoretical analyses of over-smoothing and stability are provided. Experiments show good performance with the proposed method compared to several baseline methods.
Strengths: 1) Improvements in prediction accuracy for the proposed method compared to existing methods.
2) Overall well-analyzed methods with some theoretical support on the behavior of the proposed method
Weaknesses: 1) There is no clear strategy to overcome oversmoothing.
2) Comprehensive details of the hyperparameters are required. The lack of information about the nature of the hyperparameters and their ranges leads to difficulties in understanding the overall computational efficiency of the method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) What are the training times for CITRUS, and how do they compare to the baseline methods (or the most relevant baseline)?
2) The over-smoothing effect is characterized with a theorem and experimental validation; however, is there any suggestion or improvement in the proposed method to overcome over-smoothing?
3) What are the hyperparameters of CITRUS? How would you tune them? Do you tune the number of eigenvector-eigenvalue pairs? Is some quantity such as rank required for the tensor construction?
4) Is there a specific graph structure, such as homophily or heterophily, in temporal graphs that the proposed method favors?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**. The training time comparison with the most relevant baseline, GTCNN, has been provided in the **uploaded PDF** on the main rebuttal (Tab. R2) for the NOAA and MetrLA datasets. We observe that our model requires less time per epoch to be trained.
**Q2**.
- Based on Eqn. (18), we can expect to slow down the over-smoothing phenomenon by keeping $\tilde{t}<\frac{P}{2\tilde{\lambda}}\ln{s}$. This explains why we should not increase the receptive field parameter $t$, especially when dealing with strongly connected graphs or deep GNNs. This finding is consistent with previous research on over-smoothing in discrete GNNs [1]. In the **uploaded PDF** on the main rebuttal (Fig. R3), we show the values of the learned receptive fields $t$ by CITRUS across different numbers of horizons. For this experiment, we have $b=7.08$, where $b=\frac{P}{2\tilde{\lambda}}\ln{s}$. We observe in the figure that CITRUS always learns $t<b$ to slow down the over-smoothing for all horizons.
- To further exemplify the implications of our theoretical analysis of over-smoothing, we now consider the graph receptive field $t$ as a hyperparameter in CITRUS and design an experiment for different values of $t$. In this experiment, we vary $t\in\{0.1,1,5,10,20\}$ and monitor the convergence to the over-smoothing state. We include the results of this experiment in the **uploaded PDF** on the main rebuttal (Fig. R4). We observe that for values of $t$ higher than $b$, CITRUS converges faster to the over-smoothing state for a larger number of layers. In practice, alleviating over-smoothing might not be enough to achieve good performance. Therefore, for $t<b$, we might need to perform other kinds of analyses like over-squashing, which we leave for future work.
**Q3**. CITRUS has several hyperparameters related to typical architectural and optimization choices in neural networks, such as the number of layers, learning rate, weight decay, etc. We’ve included all hyperparameters in the Appendix of the revised manuscript. Apart from these typical hyperparameters, the number of selected eigenvalue-eigenvector pairs $k$ is unique to our framework.
We can choose an appropriate $k$ using two methodologies: supervised and unsupervised. For the supervised option, we can use cross-validation, as we did in our manuscript (Table 4). For the unsupervised method, we can analyze the Laplacian matrices for a low-rank approximation task. For example, in the **uploaded PDF** on the main rebuttal (Fig. R2), we plot the explained variances of the principal components (i.e., $||v\lambda v^\top ||_F^2$ for $v$ and $\lambda$ being the eigenvectors and eigenvalues) on the MetrLA dataset. We observe a strong concentration of variance in a few components. We also observe that we can capture almost 80% of the explained variance by only relying on 50 eigenvalue-eigenvector pairs. In this case, the recommended range of $k$ is about 10-25% of the number of nodes.
Choosing an appropriate $k$ is indeed related to an efficient rank approximation on the factor Laplacians.
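A minimal sketch of the unsupervised option (our own illustrative code; the `choose_k` helper and the 80% target are hypothetical choices, not the paper's implementation):

```python
import numpy as np

# Unsupervised choice of k from the cumulative explained variance of the
# Laplacian eigenpairs. For a symmetric L = V diag(lam) V^T, each eigenpair
# contributes ||lam_i v_i v_i^T||_F^2 = lam_i^2 to the total energy ||L||_F^2.

def choose_k(L, target=0.8):
    """Smallest k capturing `target` fraction of the explained variance."""
    lam = np.linalg.eigvalsh(L)
    contrib = np.sort(lam ** 2)[::-1]            # strongest eigenpairs first
    cum = np.cumsum(contrib) / contrib.sum()     # cumulative explained variance
    return int(np.searchsorted(cum, target) + 1)

# Toy combinatorial Laplacian of a random Erdos-Renyi graph
rng = np.random.default_rng(1)
A = np.triu((rng.random((50, 50)) < 0.1).astype(float), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
k = choose_k(L, target=0.8)
```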
**Q4**. Our framework does not make any assumption about homophily or heterophily properties of the underlying graphs. We have reported the intra-homophily (spatial dimension) and inter-homophily (temporal dimension) of the MetrLA and PemsBay datasets in Tab. R3 in the **uploaded PDF** on the main rebuttal, using the proposed algorithms in [2,3]. We observe in Tab. R3 that these datasets have a mix of low and high measures, indicating a combination of homophily and heterophily behaviors. Therefore, our framework can efficiently handle both homophily and heterophily cases.
References:
-----
[1] “Graph neural networks exponentially lose expressive power for node classification,” ICLR, 2020
[2] “Greto: Remedying dynamic graph topology-task discordance via target homophily,” ICLR, 2023
[3] “Graph neural networks for graphs with heterophily: A survey”, 2022
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for the response.
I think the listing of the hyper-parameters and their ranges is important to have in the revised version of the paper. Furthermore, I think more studies on commonly used heterophilic graph datasets would be helpful in the revised paper.
---
Rebuttal 2:
Comment: Thank you for your insightful comments and for engaging in the discussion.
- As suggested, we will list the range and precise values of our hyperparameters in the Appendix.
- Regarding the heterophilic datasets, we should first mention that measuring homophily indices in spatiotemporal settings is more challenging than in regular node classification scenarios. In our case, we face two aspects: intra-graph (the spatial dimension) and inter-graph (the temporal dimension). The study in [1] considers Metr-LA, PemsBay, and temperature datasets (similar to Molene and NOAA used in our paper) as highly heterophilic.
To complement our rebuttal, we’ve studied the heterophilic measurements on the Molene and NOAA datasets, which we provide in the following table (${\pi}^{s}\_{v\_i}$ and ${\pi}^{T}\_{v\_i}$ correspond to intra- and inter-graph homophily, respectively). Due to the wide range of variability across these datasets, we have already evaluated our performance on datasets with various homophily-heterophily levels. For instance, Molene is highly heterophilic in both the intra- and inter-graph domains, while NOAA acts more homophilic than Molene in the inter-graph domain. We can also compare the provided metrics for NOAA with the Temperature or KnowAir datasets used in [1], where we observe similar dynamics. We will include these discussions related to homophily-heterophily metrics in the Appendix of the revised manuscript.
| Dataset | ${\pi}^{s}\_{v\_i}:p\_s$ | ${\pi}^{s}\_{v\_i}:q\_p$ | ${\pi}^{s}\_{v\_i}:q\_n$ | ${\pi}^{T}\_{v\_i}:p\_s$ | ${\pi}^{T}\_{v\_i}:q\_p$ | ${\pi}^{T}\_{v\_i}:q\_n$ |
|----------|----------|----------|----------|----------|----------|----------|
| MetrLA | 0.2273 | 0.4732 | 0.2995 | 0.3325 | 0.4920 | 0.1755 |
| PemsBay | 0.1073 | 0.5912 | 0.3015 | 0.2399 | 0.6863 | 0.0738 |
| Molene | 0.0148 | 0.6248 | 0.3602 | 0.0152 | 0.5545 | 0.4301 |
| NOAA | 0.1249 | 0.5345 | 0.3405 | 0.2862 | 0.4351 | 0.2786 |
References:
---
[1] "Greto: Remedying dynamic graph topology-task discordance via target homophily”, ICLR, 2023 | Summary: The paper proposes CITRUS, a novel model for jointly learning multidomain couplings from product graph signals based on tensorial PDEs on graphs (TPDEGs). By modelling these representations as separable continuous heat graph kernels as solutions to the TPDEG, the paper shows that the underlying graph is actually the Cartesian product of the domain-specific factor graphs. Then the paper studied the stability and over-smoothing aspects of CITRUS theoretically and experimentally. Finally, the paper applies CITRUS to tackle the traffic and weather spatiotemporal forecasting tasks on public datasets, illustrating its effectiveness compared to state-of-the-art methods.
Strengths: 1. The paper is overall well written and easy to follow (except some notations could be better defined).
2. Handling multi-domain graph data is an interesting and challenging problem in practice.
3. Learning joint (and not sequential) spatiotemporal couplings by modeling these dependencies using product graphs with learnable receptive fields is an interesting idea that is shown to be effective.
4. The evaluation is done with respect to a diverse set of baselines.
Weaknesses: 1. Bad notation: too many tildes, lower bars, and subscripts that seem unnecessary to me. They make the math hard to read. Math is hard to communicate, so good writing is essential.
- For example, what is the purpose of having tildes in (3) and (4)? They then disappear in (5).
- Is the lower bar for $U$ necessary? Why not just go with $U$?
- Is $\times_i$ mode-$i$ tensorial multiplication? What is the precise definition?
- One minor thing --- $:=$ is used in line 120 but not line 123. Please check for consistency of other notations as well.
2. The bound in Theorem 3.9 seems too loose to be meaningful in some cases, as shown in the right plot of Figure 3: while the bound suggests divergence, the actual $E(X_l)$ converges to zero. Also, what is the implication of Theorem 3.9 for over-smoothing, besides that we can focus on one factor graph for the phenomenon?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could the authors provide the standard deviations for the experimental results reported in Tables 1-4?
2. What are typical problems involving more than two factor graphs?
3. I see that the Cartesian product arises in solutions to the TPDEG, which is why CITRUS uses the Cartesian product. I wonder if the authors have any intuition as to why this product is a good fit for modelling spatiotemporal couplings in practice.
4. It is also interesting that CITRUS seems to outperform especially in longer horizon predictions. Any investigation why it is the case?
5. Any guided way to choose k? Also any idea why a very small k seems to be a good enough approximation?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**. We’ve significantly simplified the notations in our paper. For example, we removed the unnecessary tildes from the notation in the revised manuscript.
- Regarding the lower bar for tensors, we find it important to differentiate tensors (higher-order data) from regular matrices as in [9], but, since you found it hard to read, we will consider alternative notations, like in [1].
- For the tensor product, this is an extension of the matrix multiplication operation. For example, on 3D tensors, $\underline{X}=\underline{G}\times\_1 A$ is equal to $\underline{X}\_{(1)}=A\underline{G}\_{(1)}$, where the matrix $\underline{G}_{(1)}$ is obtained by concatenating the mode-1 slices of the tensor $\underline{G}$. We’ve included a detailed description in the Appendix of the revised manuscript. For more information please refer to Sections 2.4-5 in [1].
- In line 120, we have the definition of the product Laplacian, but the equations in line 123 are EVD forms of Laplacians, not definitions themselves.
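The mode-1 tensor product described above can be sketched as follows (an illustrative numpy snippet of ours; the unfolding convention follows numpy's row-major `reshape`, which differs from the column ordering in [1] only by a permutation of the unfolded columns):

```python
import numpy as np

# The mode-1 product X = G ×_1 A via the mode-1 unfolding:
# X_(1) = A G_(1), then fold the result back into a tensor.

def mode1_product(G, A):
    G1 = G.reshape(G.shape[0], -1)            # mode-1 unfolding of G
    X1 = A @ G1                               # matrix product on the unfolding
    return X1.reshape(A.shape[0], *G.shape[1:])

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 4, 5))            # a toy 3D tensor
A = rng.standard_normal((6, 3))               # factor matrix acting on mode 1
X = mode1_product(G, A)

# Cross-check against a direct contraction over the first mode
assert np.allclose(X, np.einsum('ij,jkl->ikl', A, G))
```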
**W2**.
- In Fig. 3, we forgot to mention that the y-axis is on a logarithmic scale. Indeed, both lines are very close in non-logarithmic scales. We’ve corrected the label on the y-axis. Regarding the divergence, please note that our theorem states that $\ln{(\frac{E(X_l)}{E(X_0)})}\le l(\ln{s}-\frac{2}{P}t\lambda)$. Therefore, when $\ln{s}-\frac{2}{P}t\lambda$ is positive, the RHS here grows with an increase in the number of layers. On the other hand, there is no problem with $E(X_l)$ going to zero because the theorem holds and is validated by the experimental results. However, in this case, the bound is looser than the case of $\ln{s}-\frac{2}{P}t\lambda<0$.
- Theorem 3.9 aims to characterize and provide insights about the over-smoothing aspects of CITRUS. To see the effect of all factor graphs, by using Lemmas A.2-A.4, we can also obtain $E(X_{l})\le e^{l\left(\ln{s}-\frac{2\sum_{p=1}^{P}{t^{(p)}\lambda^{(p)}}}{P}\right)}E(X_{0})$. Therefore, we observe that over-smoothing is affected by the weighted average of the factor graph spectra, weighted by their receptive fields. We have provided further over-smoothing diagrams for varying sparsity levels for factor ER graphs in the **uploaded PDF** on the main rebuttal (Fig. R1). We observe that the denser the factor graphs (i.e., higher values for edge probability $p$), the faster the convergence to the overall over-smoothing state. This addresses the reviewer’s concern about the factor graph effects on over-smoothing. This aligns with results in the literature, e.g., [2], about a higher possibility of over-smoothing with denser graphs. We’ve added this discussion as a Remark in the Appendix of the revised manuscript.
**Q1**. The standard deviations (STDs) have been provided in the **uploaded PDF** (Tab. R1) on the main rebuttal. Additionally, please note that we included some STDs in Tables 5, 6, and 7 in the Appendix.
**Q2**. There are several possible examples of product graphs with more than two factor graphs. For example, video data can be represented as a three-factor graph (width, height, and time). Similarly, in sleep staging with brain signals, there are multiple dimensions, such as temporal sleep windows, the spatial dimension of electrodes on the head, the frequency domain, and the time domain [3].
Another possible application could be in recommendation systems, where we have an item graph, a user graph, and a feature space of the user-item elements.
**Q3**. Cartesian product graphs are useful for processing spatiotemporal data because we can model it as the Cartesian product of a spatial graph and a simple path graph (time dimension). By copying the spatial graph through time, one can model time-varying graph signals. This type of modeling has a rich literature in graph signal processing. The reviewer is kindly referred to [4-7], and also to Section 5 in the first chapter of [8].
**Q4**. The main difference between CITRUS and other baselines lies in its ability to learn the graph receptive fields $t$, which is difficult for other baselines (with fixed or non-adaptive receptive fields) to estimate. We've plotted the learned $t$ for different numbers of horizons in the **uploaded PDF** on the main rebuttal (Fig. R3). We observe that the estimation of $t$ is more robust for longer horizons since $t<b=P\ln{s}/2\lambda$ (here, $b=7.08$). According to our theoretical analysis, this alleviates the over-smoothing phenomenon for longer horizons.
**Q5**. We can choose an appropriate $k$ using two methodologies: supervised and unsupervised. For the supervised option, we can use cross-validation, as we did in our manuscript. For the unsupervised method, we can analyze the Laplacian matrices for a low-rank approximation task. For example, in the **uploaded PDF** on the main rebuttal (Fig. R2), we plot the explained variances of the principal components of the spatial Laplacian (i.e., $||v\lambda v^\top ||_F^2$ for $v$ and $\lambda$ being the eigenvectors and eigenvalues) on the MetrLA dataset. We observe a strong concentration of variance in a few eigenvalues. We also observe that we can capture almost 80% of the explained variance by only relying on 50 components.
References:
-----
[1] “Tensor decompositions and applications,” SIAM Review, 2009
[2] “Graph neural networks exponentially lose expressive power for node classification,” ICLR, 2020
[3] “Learning product graphs from spectral templates,” IEEE TSIPN, 2023
[4] “Big data analysis with signal processing on graphs”, IEEE SPM, 2014
[5] "Product graph learning from multi-domain data with sparsity and rank constraints,” IEEE TSP, 2021
[6] "Learning product graphs from multidomain signals,” IEEE ICASSP 2020
[7] “Product graph Gaussian processes for multi-domain data imputation and active learning,” IEEE EUSIPCO, 2023
[8] "Vertex-frequency analysis of graph signals", Springer, 2019
[9] "Era of big data processing: A new approach via tensor networks and tensor decompositions". 2014
---
Rebuttal Comment 1.1:
Title: Response to author's rebuttal
Comment: Thank the authors for their rebuttal. There are a few further comments and questions I want to discuss:
W2: I guess my concern was not that the theorem is wrong, but that it is not tight enough. It seems that the authors admit that the current bound might be vacuous in certain cases. Is there a way to make the results tighter?
Q4: What do you mean when saying "the estimation of $t$ is more robust for longer horizons" in Fig R3 --- do you mean the variance is smaller? I can see that this learned $t$ is not under the scenario of oversmoothing established in the theoretical results and hence it helps alleviate oversmoothing. But I wonder if alleviating oversmoothing itself is the only reason behind the good long horizon performance (so other methods are bad because they oversmooth?)
---
Rebuttal 2:
Comment: Thank you for engaging in the discussion.
**W2**. While it is true that the bound in the right plot of Fig. 3 is not tight, we should emphasize that tightening it is an open research question. Our findings align with previous work on over-smoothing; please refer to Fig. 2 in [1], for example, where a similarly loose bound was found.
We now provide insights for possible future directions to find a tighter bound:
- The RHS in Theorem 3.9 also depends on the maximum singular value of the learnable weights matrices $W\_{l1},...,W\_{ll}$, *i.e.*, $s$, which solely depends on the training process. We can use techniques like weight normalization [1] to bound $s$.
- When there's a big gap between the arguments of the Lipschitz functions, we might use integral Lipschitz functions [2] since in that case $|f(x\_2)-f(x\_1)|\le C\frac{|x\_2-x\_1|}{|x\_2+x\_1|/2}$. This is tighter than an absolute Lipschitz function, but its effect on the factor graph spectrums needs further theoretical exploration.
- Previous works have found that the Frobenius norm might not be very tight as an upper bound for the spectral norm [3]. Precisely, $\\|T\\|\le\\|T^K\\|^{\frac{1}{K}}=\sup\_{x\ne 0}{\left(\frac{\\|T^Kx\\|}{\\|x\\|}\right)^{\frac{1}{K}}}=\sup\_{x:\\|x\\|=1}{(\\|T^Kx\\|)^{\frac{1}{K}}}$, for some finite $K$ [3]. Since $E(x)=x^\top L x=\\|L^{1/2}x\\|\_F^2=\sum_{i=1}^{N}{\lambda\_i\tilde{x}^2\_i}$ with $\tilde{x}$ being the GFT of $x$ (which is a weighted nuclear norm with direct relationship with the spectral norm), we might use other norms like Sobolev [4], gradient-adapted [5], or other $p-$norms [4,6].
- Recently, some works have explored alternatives to the regular triangular inequalities [7], which might not be tight for the over-smoothing analysis.
In summary, there are some future directions to make our bounds tighter, but they need to be carefully studied in the case of continuous filters in product graphs.
**Q4**. By robustness, we mean smaller standard deviations in prediction performance as shown in Tab. R1 in the **uploaded PDF file of the main rebuttal**.
Regarding the longer horizons, the authors from the GTCNN model [8] (which can be considered as a particular case of CITRUS for discrete, two-factor graphs) said: "The GTCNN outperforms the other models in a short horizon while Graph WaveNet and GMAN work better for longer horizons. The benefits in the short term are due to high-order spatiotemporal aggregation in the GTCNN which allows capturing efficiently the spatiotemporal patterns in the data". Therefore, we state the GTCNN model doesn't perform well in longer horizons due to the non-optimal receptive field of the discrete case. More formally, consider a one-layer CITRUS network without non-linearity for predicting one horizon as $\tilde{y}=e^{-tL\_{\diamond}}Xw$. The loss function is therefore given by $\min\_{w}{\\|y-e^{-tL\_{\diamond}}Xw\\|\_2^2}=\min\_{w}{w^\top X^\top e^{-2tL\_{\diamond}}Xw-2y^\top e^{-tL\_{\diamond}}Xw}$. Here, the second term tries to maximize the inner product (the similarity) between the target $y$ and predicted output $\tilde{y}=e^{-tL\_{\diamond}}Xw$. The first term $f_{CITRUS}=w^\top X^\top e^{-2tL\_{\diamond}}Xw$ enforces the output to be as smooth as possible on the heat diffusion filters $e^{-tL\_{\diamond}}$. For a simple GTCNN [8], this smoothness term takes the form of $f\_{GTCNN}=w^\top X^\top L\_{\diamond}^2Xw$. If we consider longer horizons, these terms will accumulate leading to bigger smoothness terms in the loss function, likely leading to over-smoothing. The key difference between $f\_{CITRUS}$ and $f\_{GTCNN}$ is that our model learns $t$ to alleviate this accumulation issue as observed in Fig. R3 in the **uploaded PDF file of the main rebuttal**. Please also note that we can also have better control of the spectrum of $f\_{CITRUS}$ because of the learnable heat kernel.
Apart from the role of $t$ in $f_{CITRUS}$, other aspects besides over-smoothing could also play an important role, like over-squashing and over-fitting, which we leave for future work.
References
----
[1] “Graph neural networks exponentially lose expressive power for node classification”, ICLR, 2020
[2] “Stability properties of graph neural networks”, IEEE TSP, 2020
[3] “Learning interface conditions in domain decomposition solvers”, NeurIPS, 2022
[4] “Reconstruction of time-varying graph signals via sobolev smoothness”, IEEE TSIPN, 2022
[5] “Time-varying graph signal reconstruction”, IEEE JSTSP, 2017
[6] “Flow smoothing and denoising: Graph signal processing in the edge-space”, IEEE GlobalSIP, 2018
[7] “Studying the effect of gnn spatial convolutions on the embedding space’s geometry”, UAI, 2023
[8] "Graph-time convolutional neural networks: Architecture and theoretical analysis", IEEE TPAMI, 2023.
---
Rebuttal 3:
Title: Thank you
Comment: I thank the authors for their detailed answers, which help me understand the work better. For Q4, one thing I am not sure about is whether the smaller standard deviations in prediction performance shown in Tab. R1 are due to more accurate estimation of $t$ in longer horizons --- as the trend for the standard deviations seems similar for GTCNN.
I will keep my score and stay on the positive side.
---
Rebuttal 4:
Comment: We thank you again for the insightful discussion.
Regarding your point: this is close to what we expected based on our previous detailed response, in which we showed that both GTCNN and CITRUS naturally embed a graph-filter smoothness term on the product graphs in their loss functions. Based on in-depth studies of the behavior of smoothness regularization terms [1,2,3], such terms essentially do not allow the response to have high variations (around the mean). However, as we outlined in our previous response, this term has a purely accumulating nature in GTCNN, most probably giving higher importance to obtaining a smooth response, while in CITRUS this term is adapted with learnable importance (due to the learnable graph receptive fields in our approach). In summary, we expected smooth results from both GTCNN and CITRUS, but more accurate and better optimized for CITRUS.
Thanks again.
References:
---
[1] "Graph regularized nonnegative matrix factorization for data representation." IEEE TPAMI, 2010
[2] "Fast robust PCA on graphs." IEEE JSTSP, 2016
[3] "Robust principal component analysis on graphs." ICCV, 2015 | Summary: The authors propose Tensorial Partial Differential Equations (TPDEG) to model multidomain data. Then they propose Continuous Product Graph Neural Networks (CITRUS) as a continuous solution. They provide theoretical and experimental analysis of their proposed approaches. They test their approach on spatiotemporal forecasting tasks and show SOTA performance.
Strengths: * The paper is well written.
* The proposed methods are backed by theory.
* The experimental results are SOTA.
* Ablation study is provided.
Weaknesses: See questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is it possible that real world graphs won't be representable as a cartesian product of factor graphs? How much approximation error is observed due to this assumption?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors address the limitations of their work as the last paragraph of their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: One example where real-world graphs cannot be represented as a Cartesian product of factor graphs is the case where we have time-varying dynamic graphs, which might lead to some approximation errors by assuming the resulting graph is a Cartesian product graph. Indeed, this comment relates to the general issue of inaccurate adjacencies, denoted as $\tilde{A}=A+E$, which we studied in the stability analysis of CITRUS in Section 3.2. Therefore, the approximation error is bounded in Theorem 3.7.
Another case is when the real-world graphs follow a different product graph operation. For example, let's consider two well-known graph products: the Strong product and the Kronecker product. Suppose the true graph is actually a Strong product graph, $A=A_1\otimes A_2+A_1\otimes I_2+I_1\otimes A_2$, but it is mistakenly treated as a Cartesian product $\tilde{A}=A_1\otimes I_2+I_1\otimes A_2$. In this case, the error $E$ is given by $E=A-\tilde{A}=A_1\otimes A_2$, where $||E||=\lambda^{(1)}\_{max}\lambda^{(2)}\_{max}$. Therefore, the approximation error depends on the multiplication of the maximum eigenvalues of the factor graphs.
Similarly, if we consider Kronecker product graphs, $\tilde{A}=A_1\otimes A_2$, we find that $||E||\le\lambda^{(1)}\_{max}+\lambda^{(2)}\_{max}+\lambda^{(1)}\_{max}\lambda^{(2)}\_{max}$. In this case, the summation of the maximum eigenvalues of the factor graphs matters too.
In summary, the bound on the approximation error depends on the spectrum of the factor graphs.
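A small numerical illustration (our own sketch with toy Erdős–Rényi factors, not from the paper) of the Strong-vs-Cartesian case, checking that $\|E\|=\lambda^{(1)}\_{max}\lambda^{(2)}\_{max}$:

```python
import numpy as np

# When a Strong product graph is mistakenly treated as a Cartesian product,
# the error is E = A - Ã = A_1 ⊗ A_2, whose spectral norm is the product of
# the factors' spectral radii (for symmetric adjacencies).

rng = np.random.default_rng(2)

def rand_adj(n, p=0.4):
    """Symmetric 0/1 adjacency of an Erdos-Renyi graph."""
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return A + A.T

A1, A2 = rand_adj(5), rand_adj(6)
I1, I2 = np.eye(5), np.eye(6)

strong = np.kron(A1, A2) + np.kron(A1, I2) + np.kron(I1, A2)  # true graph
cart = np.kron(A1, I2) + np.kron(I1, A2)                      # assumed model
E = strong - cart                                             # = A1 ⊗ A2

lam1 = np.abs(np.linalg.eigvalsh(A1)).max()   # spectral radius of A1
lam2 = np.abs(np.linalg.eigvalsh(A2)).max()   # spectral radius of A2
assert np.allclose(np.linalg.norm(E, 2), lam1 * lam2)
```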
---
Rebuttal Comment 1.1:
Comment: Thank you for reply. I acknowledge reading the rebuttal. | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewers for their thoughtful and constructive feedback. We are encouraged by their recognition of several strengths in our work. In particular, the reviewers found that: "*The proposed methods are backed by theory*" (reviewers wQ9d, QFxx, and Y9UP), "*The experimental results are SOTA*" (reviewers wQ9d and QFxx), "*The paper is overall well written and easy to follow*" (reviewers wQ9d, gs8b, and Y9UP), and the paper presents a "*thorough analysis of the proposed methods*" (reviewers QFxx and Y9UP). Besides, they found that “*handling multi-domain graph data is an interesting and challenging problem in practice*” (reviewer gs8b).
We have addressed each of the reviewers' comments in a detailed manner, focusing on clarifying any points of uncertainty and resolving any misunderstandings that may have arisen. Our responses are organized in a point-by-point format, as outlined in the subsequent sections of this rebuttal. A summary of these responses is listed as follows:
* We have provided additional analyses on the over-smoothing phenomenon and discussed how to alleviate it.
* We discussed the approximation error that occurs due to the Cartesian product assumption when a graph cannot be fully represented as a Cartesian product of factor graphs.
* We clarified the methodology used for tuning and analyzing the hyperparameter of CITRUS.
* We added additional theoretical and experimental comparisons with the most relevant baselines and spectral GNNs to gain more insights into the advantages of our method.
* We clarified the main intuition behind using Cartesian product graphs to model time-varying spatiotemporal time series.
* We outlined the ability of the proposed method to handle both homophily and heterophily in graphs.
* We highlighted the computational advantages of the separability of the graph filters.
**We have uploaded a one-page PDF file and included additional results as tables and figures. We kindly invite reviewers to refer to them whenever required.**
Pdf: /pdf/2dbe9777ef4c8128c1cdf2b1c1817d4093c5aafa.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fine-Tuning is Fine, if Calibrated | Accept (poster) | Summary: The paper proposes a simple post-training calibration technique for classifying missing classes after fine-tuning. For example, assuming the pre-trained model can classify 1000 classes and is fine-tuned on a subset of these classes from a different image domain, the proposed method improves the classification accuracy of the *absent classes* on this new domain. Specifically, the proposed method adds a calibration hyper-parameter to artificially boost the probability of predicting the absent classes. To motivate the method, the paper investigates the quality of feature learning using the Nearest Class Mean classifier to isolate the cause of bad performance on the absent classes. While simple, the method shows good performance gain on multiple datasets.
Strengths: * The proposed method is easy to implement and provides good performance gains.
* The paper uses the NCM classifier to investigate the feature extractor's quality and isolate the linear classifier's influence after fine-tuning. This methodology provides a clear motivation for the proposed method.
* The finding that fine-tuning does not completely destroy the features of absent classes is interesting. This insight can motivate further study, especially for improving the robustness of fine-tuning.
* The paper provides a detailed ablation study showing the proposed method's strengths and limitations.
* The extent of performance gain hinges on the fine-tuning procedure.
* The distribution of absent classes also affects performance.
Weaknesses: * **Statements are not precise**. In the abstract and introduction, the paper claims that a fine-tuned model does not forget the relationship among absent classes. However, this claim is not precise. As the paper points out, the extent of forgetting and degradation depends on the fine-tuning procedure. For example, an Adam optimizer with a larger learning rate can degrade the features of the absent class. This is consistent with the existing literature on the robustness of fine-tuning. It's possible that the proposed technique only works well under moderate changes to the pre-trained model.
* **The fine-tuning setting is limited**. The proposed method only works for a particular fine-tuning configuration under constrained assumptions. Specifically, the method assumes a classification task, and the pre-trained model can classify all fine-tuning classes, including absent ones. Therefore, the paper's claims on forgetting and feature learning are limited by its scope.
Technical Quality: 3
Clarity: 3
Questions for Authors: * While the paper investigated several factors that can affect calibration effectiveness, it is unclear how we should decide when the calibration is useful. Could the authors consider dependency on the model's **intrinsic** properties? The external factors may all lead to a common intrinsic property: for example, the fine-tuned model's deviation (in weight space) could be an indicator. RMSprop and Adam are known to converge faster and may therefore lead to a larger deviation from the pre-trained model.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not have a potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed review and positive assessment of the strengths of our paper. We address your concerns as follows.
Weakness:
**W1: Statements are not precise. … The extent of forgetting and degradation depends on the fine-tuning procedure. ….**
We apologize if our statements in the abstract and introduction were not precise or clear. In our humble opinion, learning rates are hyper-parameters of optimizers, and our statement was made assuming the hyper-parameters are properly selected. As mentioned in Lines 31-38, our study was motivated by the findings in [48]. We note that in [48], SGD is the main optimizer and *the learning rate has been carefully chosen.* However, the absent class accuracy still suffers a drastic drop, and [48] viewed it as an instance of forgetting. Our statement in the abstract, *“To our surprise, we find that the fine-tuned model neither forgets the relationship among the other classes nor degrades the features to recognize these classes,”* was made against the claim by [48]. We found that the accuracy drop is mainly due to the biased logits, not the forgetting of features and class relationships for the absent classes.
We hope the paragraph above clarifies your concern.
In our humble opinion, when the learning rate is not well chosen or the optimizer is not applied properly, any observations about the machine learning model may be misleading or doubtful.
For full disclosure, we indeed (re-)learned such a lesson when we extended our study beyond the SGD optimizer used in [48] (Lines 72 - 76 and Lines 316 - 325). When we first applied the Adam optimizer, we followed the practice of setting a small learning rate (e.g., 1e-3 and 1e-4) to ensure that the training could converge (on the fine-tuning classes). Under such a setting, we saw a notable gain in the fine-tuning class accuracy but a poor NCMu/y and ACCu/u (thus a poor AUSUC), contradicting our findings (Lines 42 - 64) when using the SGD optimizer. At that time, *we almost drew the conclusion that our finding is optimizer-specific.* However, by further reducing the learning rate, we once again saw a decent NCMu/y and ACCu/u and a similar gain in AUSUC as using SGD (see Figure 9). This motivated us to conduct a comprehensive analysis, exploring a wide range of learning rates across six optimizers.
**Q1: It's possible that the proposed technique only works well under moderate changes to the pre-trained model.** We appreciate your question. As our goal is to preserve (and even improve) the discriminative ability on the absent classes from the pre-training model to the fine-tuning models, it is reasonable that the change to the pre-trained model cannot go arbitrarily large. With that being said, the increase in AUSUC in Figure 9 (e.g., from 0.5 to 0.63) suggests that the fine-tuned model has undergone a sufficient change to improve its performance in the downstream domain.
**Q2. The fine-tuning setting is limited.**
We acknowledge that our scope is limited to classification tasks. However, we respectfully think it is a limitation but not necessarily a weakness. In our humble opinion, many advances in machine learning start from the investigation of classification problems (e.g., AlexNet and self-supervised learning) and gradually extend to other tasks. While we work on a particular fine-tuning configuration with a subset of classes, we respectfully think it is a practical setting (with access to large pre-trained models like CLIP but limited fine-tuning data) that deserves deeper exploration. With this in mind, we conducted a systematic, extensive analysis on a focused topic. While we agree that exploring other tasks could broaden the applicability of our findings, the focused scope of our current study allows us to thoroughly investigate and validate our claims.
**Q1: Dependency on the model's intrinsic properties.**
Thank you for the insightful question. As mentioned in Lines 85 - 98 and Lines 253 - 257, the calibration method works when the drop in the absent class accuracy primarily comes from the biased logits, not the degradation of the feature extractor or the relationship among absent classes.
We follow your suggestion to explore the model's intrinsic properties. Since the parameter space of the model is of extremely high dimensionality, it is hard to use the L2 distance between the pre-trained and fine-tuned models to quantify the model deviation. We thus explore a different measurement.
We calculated the linear Centered Kernel Alignment (CKA) in Appendix C.5 (Lines 733-737) and found that the CKA of the absent classes’ linear classifiers (between the pre-trained and fine-tuned models) correlates well with performance after calibration. As shown in Figure R.1 of our rebuttal PDF, the Unseen CKA across different optimizers follows a similar pattern to their Area Under the Seen-Unseen Curve (AUSUC) in Figure 9. Specifically, when the Unseen CKA falls below a certain threshold (e.g., 0.98), there is a corresponding drastic drop in AUSUC. This suggests that the CKA can be a useful indicator of calibration effectiveness. We will complete and include this study in the camera-ready version.
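For intuition, linear CKA between two sets of classifier weights (or features) can be computed in a few lines. This is a generic sketch under our own naming, not the authors' implementation; it takes two matrices whose rows are samples (or classes):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices X, Y of shape (n_rows, n_features)."""
    X = X - X.mean(axis=0, keepdims=True)  # center each column
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling, so a value near 1 indicates that the compared representations changed little in terms of their relative geometry.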
---
Rebuttal Comment 1.1:
Title: Kindly request your response
Comment: Dear Reviewer YzGv,
We appreciate your valuable comments on our paper. We have prepared a rebuttal (together with a general response to all reviewers) and tried our best to address most if not all of your concerns. We notice that the author-reviewer discussion period is coming to an end, and we are willing to answer any unresolved or further questions that you may have regarding our rebuttal if time is allowed.
If our rebuttal has addressed your concerns, we would appreciate it if you would be willing to consider raising your original rating. Thank you for your consideration.
Best,
Authors | Summary: The authors argue that fine-tuning doesn’t forget the features for classes not participating in it, but rather downscales their logits, as a result of which the model ends up being overconfident for the fine-tuning classes. Counter-intuitively, the authors claim that fine-tuning also enhances the discriminative ability of the model for the classes not participating in fine-tuning. For this, the authors analyze the accuracy of the NCM classifier on the features of the model and show that it increases even for classes absent during fine-tuning. Then the authors show that the relative order among the set of absent classes is still in place, but the model becomes overconfident on the classes used for fine-tuning, thereby leading to a drop in accuracy on absent classes. To fix this, the authors analyze simple post-processing calibration methods and demonstrate recovery in the performance of absent classes, with some drop in performance of fine-tuning classes. The analysis is done on ImageNet-R, VTAB, and Office-Home datasets.
Strengths: 1) The gains observed on calibrating the model are impressive, and it is interesting to see that merely by calibrating the model, the model’s performance on absent classes can improve to this extent.
2) The motivation behind the post hoc calibration method is interesting and it is unexpected that just the confidence of the absent would lower down while preserving their relative order on performing fine-tuning.
Weaknesses: 1) An increase in NCM classifier’s accuracy need not necessarily mean that the model has become better in discriminating features. It only means that the features become closer to the corresponding class mean in l2 distance metric space. I think this argument is not concrete enough and requires more evidence.
2) It is not clear why a drop in accuracy is seen in the fine-tuning classes when using PCV as the post-calibration method. Even in the case of ALG, where there isn’t a significant drop, the accuracy of classes absent during fine-tuning is still significantly lower (e.g., on Office-Home it drops by over 20%) than in the pre-trained model. This suggests that the authors' claim of feature enhancement for absent classes might not be true.
3) The authors propose to use training data / validation data to find the right threshold. This would mean access to the model as well as the data. In such a scenario, one could rather easily fine-tune the model on the absent classes as well, which would not require a lot of compute. Therefore, from a practical standpoint, it is not completely obvious how the proposed method benefits more than mere fine-tuning. It would be great if the authors could compare the budget required for fine-tuning on absent classes vs. post-hoc calibration to achieve similar performance. Although I agree that the observation itself is interesting, analyzing the efficiency aspect could help the authors make the claim about using post-calibration methods stronger.
4) I think that the analysis shown in Figure 9 needs more rigor. Further, I think the authors need not discuss this in the main paper, as it doesn't add much. The learning rates used for the different optimizers are the same, but the learning rates for Adam and Adagrad need to be scaled down to make a fair comparison with SGD.
5) The analysis on why absent-class features improve on fine-tuning is not rigorous, and I would encourage the authors not to discuss this in the main paper. Most of the claims in this analysis seem low-hanging and not sound enough. This currently hinders the readability of the paper.
6) Similarly, Figure 10 doesn't add much to the storyline the authors have presented in this work, and it requires more rigor. I would suggest they restructure the paper a bit and remove this analysis from the main paper.
Minor comments:
In Figure 5, the y-axis should be between 0 and 1 since it represents a probability.
I would be happy to increase my scores if my concerns are sufficiently addressed.
Technical Quality: 2
Clarity: 1
Questions for Authors: It would be great if the authors can present their results on domainnet dataset [1].
I request the authors to kindly address the questions in the weaknesses section.
[1] https://paperswithcode.com/dataset/domainnet
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: Yes, the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We are pleased that the reviewer recognized several key strengths of our paper. We respond to your concerns (weaknesses and questions) as follows.
**W1. NCM classifier’s accuracy and discriminating features.**
Thank you for the comment. Our paper focuses on classification problems. Thus, we consider improvement in *classification accuracies* as one critical metric to assess the *features’ discriminative abilities*. We note that increasing the NCM accuracy requires the features of each class not only to be close to the corresponding class but also to be far away from other classes, meaning that the class means need to be separated apart as well. These properties align with linear discriminant analysis and, in our humble opinion, are what we expect discriminative features to possess.
Besides NCM, we note that in Section 4.3 (specifically Figure 4), the fine-tuned model’s accuracy among absent classes also increases. We view this as further evidence that the features’ discriminative ability improves.
With that being said, we are aware that NCM is not the only way to assess feature quality, and we will be happy to include further analyses.
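For readers unfamiliar with the NCM evaluation discussed above, a minimal sketch (our own illustrative code with hypothetical names, not the paper's implementation): class means are computed from the feature extractor's outputs, and each sample is assigned to the class with the nearest mean.

```python
import numpy as np

def ncm_fit(features, labels):
    # One mean per class, computed from extracted features
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, means

def ncm_predict(features, classes, means):
    # Assign each sample to the class whose mean is nearest in L2 distance
    dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]
```

Because it has no learned classifier weights, NCM accuracy isolates the quality of the features themselves, which is why it is used here to probe the fine-tuned extractor.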
**W2: Accuracy by PCV and ALG.**
We respectfully think there might be a misunderstanding. Our main finding in the paper is that with proper calibration of the logits, the fine-tuned model (red $\star$ in Figure 2) can regain and even increase its accuracy in the absent classes (the y-axis value along the red curve can surpass the green $\star$), suggesting that its features on absent classes do not degrade but often improve. (Please also see our remark in Lines 94-98.)
We note that the post-processing calibration factor $\gamma$ is applied after the model has been fine-tuned. In other words, the accuracy drop in either the fine-tuning or absent classes (as the reviewer found) may result from sub-optimal selections of $\gamma$ and have nothing to do with feature qualities. Indeed, when $\gamma$ is not properly set, either the fine-tuning class accuracy or the absent class accuracy can drop drastically, to zero, corresponding to the two endpoints of the red curves in Figure 2 and Figure 7.
PCV and ALG are two approaches to selecting $\gamma$, but we certainly do not claim they are optimal. In Table 1, we can see that “Fine-tuning + $\gamma^\star$” could achieve a much better balance of ACCu/y and ACCs/y than “Fine-tuning + $\gamma$PCV” and “Fine-tuning + $\gamma$ALG.” We note that “Fine-tuning + $\gamma^\star$” outperforms the pre-trained model in ACCu/y and ACCs/y, indicating the fine-tuned models have enhanced accuracy in both categories, if calibrated properly. We leave a better, more robust approach to selecting $\gamma$ for future work.
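To make the role of the post-processing factor concrete, here is a minimal sketch of offset-style logit calibration (our own illustrative code, under the assumption that $\gamma$ is added to the absent-class logits before the argmax; the paper's exact parameterization may differ):

```python
import numpy as np

def calibrated_predict(logits, absent_mask, gamma):
    """logits: (n_samples, n_classes); absent_mask: boolean (n_classes,)
    marking absent classes; gamma: scalar offset boosting their logits."""
    z = logits.copy()
    z[:, absent_mask] += gamma  # counteract the bias toward fine-tuning classes
    return z.argmax(axis=1)
```

With `gamma=0` this recovers the uncalibrated fine-tuned model; a very large `gamma` forces absent-class predictions, and intermediate values trade off the fine-tuning-class and absent-class accuracies.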
**W3: Training data/validation data to find the right threshold.** We respectfully think there might be a misunderstanding. We apologize if we did not describe the setting clearly and we will certainly improve it.
As mentioned in Section 5 of the main paper, “Pseudo cross-validation (PCV) partitions D_{tr} into **pseudo**-fine-tuning and **pseudo**-absent classes and finds $\gamma$ that can balance the pseudo-fine-tuning and pseudo-absent class accuracy.” We note that D_{tr} is not the data used in pre-training (which contains absent class data), but the fine-tuning data, as defined in section 3 (Lines 150 - 155). The **pseudo**-fine-tuning and **pseudo**-absent classes are both from the fine-tuning classes; no absent class data are exposed to PCV. (More details can be found in section B.2 of the appendix.)
**W4: Figure 9 and learning rates.**
We will improve Figure 9 to make it easier to read. We respectfully think that our study in section 6 (Lines 316 - 325) is valuable and essential, as we want to know whether our findings in sections 4 and 5 are general or optimizer-specific. Regarding the learning rate, we note that the values along the x-axis in Figure 9 are not the exact learning rates, but the multiplying factors to the **default learning rate (LR)** of each optimizer, respectively (please see Lines 319 - 320). We obtain the default LR from the `torch.optim` and `pytorch-optimizer` packages. We presented their values in Table R.3 in the rebuttal pdf, and we will include it in the camera-ready version. We apologize that we did not make this part clear in the main paper. After all, Figure 9 is not meant to argue that SGD is better than other optimizers, but to show that our findings in sections 4 and 5 hold for different optimizers when their learning rates are properly set.
**W5 & 6: Analysis (Lines 326 - 360) and Figure 10.**
Thank you for your feedback. We apologize if these parts were not rigorous, and we will consider moving them to the Appendix. We acknowledge that the analysis is not a formal theory, and we will certainly clarify it. We note that we have conducted several other analyses to understand our findings (c.f. Line 361 - 363). Due to the space limit, we keep them in the Appendix (section C and section D.5). We will polish and condense some of them to replace the current Lines 326 - 360 in the main paper.
**W7: Figure 5.** We will adjust the y-axis. Thank you for pointing it out.
**Q1: DomainNet**
As per your suggestion, we conducted additional experiments on DomainNet. We followed setting 1 in section 4.1 to pre-train a ResNet-50 model on the ‘Real’ domain with 345 classes and then fine-tuned it on 170 randomly selected classes in the other five domains (ClipArt, Sketch, Infograph, Painting, and Quickdraw). We include the results in the rebuttal PDF (Table R.1 and Table R.2).
We can see that average AUSUC (c.f. Lines 280 - 289 in the main paper) and ACCu/y both increase after fine-tuning, even though absent classes have not been involved in the fine-tuning process, which is consistent with our findings from other datasets. We will include the complete results and discussions in the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Kindly request your response
Comment: Dear Reviewer 5T9R,
We appreciate your valuable comments on our paper. We have prepared a rebuttal (together with a general response to all reviewers) and tried our best to address most if not all of your concerns. We notice that the author-reviewer discussion period is coming to an end, and we are willing to answer any unresolved or further questions that you may have regarding our rebuttal if time is allowed.
If our rebuttal has addressed your concerns, we would appreciate it if you would be willing to consider raising your original rating. Thank you for your consideration.
Best,
Authors
---
Rebuttal Comment 1.2:
Comment: Thanks to the authors for their rebuttal. The rebuttal mostly addresses my concerns and therefore, I will increase my score.
---
Reply to Comment 1.2.1:
Title: Re: Official Comment by Reviewer 5T9R
Comment: We are glad that our rebuttal has addressed most of your concerns and you are willing to increase the score. We will incorporate the rebuttal into our final version. Thanks. | Summary: It is commonly believed that fine-tuning zero-shot models on seen classes will lead to a decrease in performance on unseen classes.
In this paper, the authors systematically examine this issue and find that (1) the fine-tuned feature extractor is not damaged: NCM improves the absent-class accuracy without catastrophic forgetting; (2) the main factor that damages the FT model’s ability to correctly classify absent-class examples is the biased logit values towards fine-tuning classes; (3) a simple post-processing calibration of logits, i.e., offsetting the seen classes, could bring back the zero-shot performance on absent classes. Extensive experiments and analyses validate the claims of the authors.
Strengths: The paper is well-organized and the presentation is clear. The analyses are comprehensive and convincing. This study provides insights that corrected my previous viewpoint that fine-tuning causes the forgetting of knowledge of absent classes. This paper is undoubtedly a valuable work and worth accepting.
Weaknesses: This is a solid paper, and I did not find any major weaknesses.
I suggest that the authors include more experiments on CLIP models, such as the base-to-new setting in CoCoOp, reporting AccY/Y, AccS/Y, and AccU/Y on 11 datasets.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: No negative societal impact has been identified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback. This project has been a challenging journey, yet rewarding. We respond to your valuable comment below.
**W: “I suggest that the authors include more experiments on CLIP models, …”**
Thank you for the suggestion. We will certainly include more experiments and discussions on CLIP models. Compared to conventional classification models that *learn for each class a linear classifier* on top of the feature extractor, CLIP models *learn a shared “text encoder”* that can generate linear classifiers (i.e., text embeddings) given class names or descriptions. We surmise that such a shared encoder would facilitate fine-tuning with a subset of classes. Concretely, instead of directly updating the linear classifiers as in conventional classification models, fine-tuning a CLIP model would update the “text encoder” that generates the linear classifiers. Given only the fine-tuning data from a subset of classes, the CLIP model has the potential to learn their common properties (e.g., domain specifics) and transfer them to the absent classes through the shared encoder. As a result, the discrepant logit scales might be reduced. The prompting capability of CLIP models further enables new approaches like CoCoOp to adapt the model without adjusting its parameters. We will perform a detailed and systematic study on fine-tuning with CLIP models, considering both full fine-tuning and CoCoOp and reporting ACCy/y, ACCs/y, and ACCu/y. (We note that our fine-tuning classes correspond to CoCoOp’s base classes; our absent classes align with its new classes.) We will also investigate whether calibration is compatible with CLIP models to further boost the fine-tuning performance.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I am interested in reproducing the results presented in your paper, but I noticed that your code was not provided and the code of the baseline [1] is not publicly available. Could you let me know if there are any plans to release the code, and if so, when it might be available?
Best,
[1] holistic transfer: towards non-disruptive fine-tuning with partial target data
---
Reply to Comment 1.1.1:
Title: Re: Official Comment by Reviewer 9Ei3
Comment: Dear Reviewer 9Ei3,
We appreciate your interest in our study and in reproducing the results presented in our paper. We understand the importance of reproducibility in research and are committed to supporting it. We plan to release our code along with the camera-ready version of the paper. We appreciate your understanding and patience.
Best,
Authors | Summary: The paper unveils the improved features of absent classes when a pre-trained model is fine-tuned on a subset of all classes. The paper presents an empirical study on three datasets to demonstrate this finding and proposes a calibration method to post-process the logits after fine-tuning to improve the classification result in absent classes. The reason why the absent classes are improved is analyzed and the effectiveness of the calibration method is supported by experimental results.
Strengths: 1. The finding on the improved performance of absent classes is interesting.
2. The proposed calibration method is simple and easy to use.
3. The presentation of the paper is clear.
4. The empirical results in the appendix are extensive.
Weaknesses: 1. The reason why the absent classes' features are improved is demonstrated to be that the fine-tuned classes have similar features as absent classes. That suggests that the improvement does not always hold when the fine-tuned classes have features that are not helpful or even harmful (e.g., spurious correlation) to absent classes. This should be discussed further in the submission.
2. In some figures, it looks like Tu et al. achieves the best trade-off, while the proposed scaling method is only presented with a line. It is probably better to show the performance of the two $\gamma$ selection methods in the figure.
Technical Quality: 3
Clarity: 3
Questions for Authors: NA
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and valuable feedback on our work.
**Q1: … the improvement does not always hold when the fine-tuned classes have features that are not helpful or even harmful ...**
Thank you for your insightful question, and we will include more discussions in our camera-ready version.
In our study, while not explicitly mentioned, we suppose the pre-trained model has learned a decent discriminative ability and faithful similarity among classes (in the pre-training domain), and we aim to preserve and even improve these properties (in the downstream domain) after fine-tuning with a subset of classes.
Our analysis in Section 6 (Lines 326 - 360) explains why the absent class features could improve after fine-tuning. *We note that we certainly did not claim that the improvement would always happen under any conditions.* For instance, in Lines 336 - 339, we pointed out one necessary condition. Essentially, if the domain shift affects similar classes *differently* — for example, huge domain shifts break the similarity learned in the pre-trained model — then the improvement would unlikely hold. We will extend this discussion to incorporate non-helpful or harmful features. For example, if the domain shift introduces spurious correlations so that dissimilar classes (e.g., some fine-tuning and absent classes in the pre-training domains) have similar features in the downstream domain, then the features after fine-tuning might become misleading.
After all, just like domain adaptation techniques typically degrade when there is a huge discrepancy between the source and target domains, we think it is reasonable that our findings or proposed approach may not work in some (extreme) conditions. We will surely extend the discussion so that readers or future users can get a better understanding of when the improvement may or may not hold.
**Q2: … it looks like Tu et al. achieves the best trade-off, while the proposed scaling method is only presented with a line. …**
Thank you for the suggestion and we will include the two $\gamma$ selection methods in the figures where appropriate.
Meanwhile, we want to reiterate the *difference* and *compatibility* between Tu’s method and our calibration method. We note that Tu’s method is a fine-tuning approach that updates the pre-trained model, and in our paper, we compare it to no fine-tuning and conventional full fine-tuning. The three $\star$ in Figure 2 and Figure 7 of the main paper correspond to them: black for Tu’s method; green for no fine-tuning; and red for conventional full fine-tuning. (All of them are without calibration yet.)
In contrast, the calibration method with $\gamma$ adjusts the relative strengths of the logits of the fine-tuning and absent classes, and it can be applied as post-processing to all the aforementioned models: it creates the three seen-unseen accuracy curves in Figure 7.
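As an illustration of how such a curve and its area could be traced by sweeping the calibration factor, here is a hedged sketch (our own naming and simplifications, not the authors' code; it assumes an additive offset on absent-class logits and nonempty seen/absent sample sets):

```python
import numpy as np

def ausuc(logits, labels, absent_mask, gammas):
    # Sweep gamma, recording (seen-class acc, absent-class acc) pairs.
    is_absent_sample = absent_mask[labels]
    pts = []
    for g in gammas:
        z = logits.copy()
        z[:, absent_mask] += g
        pred = z.argmax(axis=1)
        acc_seen = (pred[~is_absent_sample] == labels[~is_absent_sample]).mean()
        acc_absent = (pred[is_absent_sample] == labels[is_absent_sample]).mean()
        pts.append((acc_seen, acc_absent))
    pts = np.array(sorted(pts))  # order points by seen-class accuracy
    x, y = pts[:, 0], pts[:, 1]
    # Trapezoidal area under the traced seen-unseen curve
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))
```

The two endpoints of the sweep correspond to the extremes described above: a very negative offset yields only seen-class predictions, a very positive one only absent-class predictions.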
In our paper, our main finding is that with calibration, the conventionally fully fine-tuned model (red) can regain its Absent class accuracy and can potentially achieve a better balance of Fine-tuning and Absent class accuracies than Tu’s method (black), as evidenced by that the red curves surpass the black curves in many of the figures (c.f., Figure 7, Figure U, Figure V, and Figure W).
We appreciate your detailed observations in several figures in Figure V, where the black $\star$ (Tu’s method, with more complex training than full fine-tuning) seems to surpass the red curves or achieve the best trade-off that one can obtain on the red curves. We will certainly add the selected $\gamma$ on the red curves. Besides that, we note that the calibration method can be used to improve Tu’s method as well, as shown in Figure U and Figure W. With an appropriate $\gamma$, we could obtain a better trade-off along the black curves than the black $\star$.
---
Rebuttal Comment 1.1:
Title: Kindly request your response
Comment: Dear Reviewer amJX,
We appreciate your valuable comments on our paper. We have prepared a rebuttal (together with a general response to all reviewers) and tried our best to address most if not all of your concerns. We notice that the author-reviewer discussion period is coming to an end, and we are willing to answer any unresolved or further questions that you may have regarding our rebuttal if time is allowed.
If our rebuttal has addressed your concerns, we would appreciate it if you would be willing to consider raising your original rating. Thank you for your consideration.
Best,
Authors | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable comments. We are glad that the reviewers found our findings and motivations “interesting” (amJX, 5T9R, YzGv), our study providing “insights” correcting prior belief (9Ei3), our solution “simple and easy to use” (amJX, YzGv) with “impressive” gains (5T9R), our empirical results “extensive” (amJX, 9Ei3, YzGv), our analyses “comprehensive and convincing” (9Ei3, YzGv), and our presentation “clear” and “well-organized” (amJX, 9Ei3). Reviewer 9Ei3 further said, “This paper is undoubtedly a valuable work and worth accepting.” Reviewer YzGv also said, “This insight can motivate further study, especially for improving the robustness of fine-tuning.”
We have tried our best to address most if not all of the concerns, and we will modify our camera-ready version accordingly to incorporate all the feedback.
Pdf: /pdf/e36914f4cd1b9d250d5f5ddfcb3ca76d90559733.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Probabilistic Decomposed Linear Dynamical Systems for Robust Discovery of Latent Neural Dynamics | Accept (poster) | Summary: This paper develops a novel model of latent neural dynamics that builds on existing work in switching and time-varying linear state space models. In particular, the authors propose a full probabilistic formulation of decomposed LDS models and provide a variational EM algorithm to do inference in these models. They apply their method to various synthetic and real neural datasets, showing improved performance over the primary time-varying linear alternative methods.
Strengths: Overall, the proposed model and method are original and are a positive contribution to the modeling of neural dynamics. The authors correctly point out shortcomings of both the SLDS and dLDS approaches, and develop a new model that addresses these shortcomings. In particular, the probabilistic formulation of dLDS dramatically improves the model's performance, and the flexibility given by the time-varying linear combination of DOs helps p-dLDS perform well in situations for which the SLDS is not well suited. The evaluation is high-quality, as the paper compares relevant methods in four different tasks. Finally, the approach is well-motivated, and more often than not the methods and model are clearly described.
Weaknesses: The model and fitting method have two added complexities that may make it harder than needed to reliably fit the model parameters and make inferences about the data. In particular, the update step for the sparsity coefficients requires a multi-step approximate procedure, and the time-varying offset term depends on a hyperparameter that can take on a wide range of values. These points are discussed in more detail in the questions.
Technical Quality: 4
Clarity: 4
Questions for Authors: - The approach for determining `b_t` via a moving average accomplishes the goal of learning a changing fixed point. However, it appears to be suboptimal for cases where the fixed point can change both rapidly or slowly (i.e. across different timescales) because of the fixed window size. Additionally, the requirement to search over a wide range of window sizes poses challenges for optimization. I'd encourage the authors to consider other approaches. Could the bias term also be determined via a sum over a dictionary with shared coefficients `c`?
- Relatedly, can the authors provide more details of the hyperparameter selection scheme for the window size? The paper states that for the Lorenz experiment, the window size was sampled uniformly from 2 to the length of the time series. Does the procedure follow the dLDS hyperparameter selection, in which 1000 random choices of `S` and `\xi` were generated and a separate model was fit given each setting of hyperparameters? What was the optimal offset values on the real datasets, and does this relate to the timescales of the p-dLDS coefficient switching?
- The parameterization and update step for the sparsity coefficients appears to work well on the simulated and real datasets. However, it does require a multistep approximate procedure for the update. I'm wondering if the authors have considered simplifications to the model that may allow for using simpler inference in this step while still achieving the goals of learning sparse coefficients? For example, does it work to model each $\gamma_t$ independently? This appears to be very similar to the original SBL-DF algorithm and the first step of the sparsity coefficient update which initializes $q(c) q(\gamma)$ using SBL-DF. How much improved performance does the proposed model & methods have relative to this baseline?
- In the reaching experiment, the reach angles are classified using the discrete states or p-dLDS coefficients and not the continuous states. However, I would expect classification using the continuous states to outperform this in both cases. I would suggest the authors further justify that comparison in the text and compare to classification based on the continuous states.
Minor comments
- It appears that eq 5 describes the joint distribution $p(c_t, \gamma_t | c_{t-1})$, not just $p(c_t \mid c_{t-1})$.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors address the limitations in the text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer mAN7 for their thoughtful feedback and suggested improvements to our experiments. We are encouraged that they found our work to be "high-quality" and a "positive contribution to the modeling of neural dynamics".
**W1:** While p-dLDS does require more complex mathematical concepts, the main goal of this work is to develop a procedure that is easier and more reliable to work with than dLDS. Specifically, p-dLDS actually simplifies the fitting process by reducing the total number of hyperparameters for inference to two: the SBL-DF tradeoff $\xi$ and the offset window size $S$. In contrast, dLDS needs three hyperparameters ($\lambda_0$, $\lambda_1$, and $\lambda_2$) for determining the tradeoff between dynamic reconstruction, coefficient sparsity, and coefficient smoothness. In practice, manually balancing these tradeoffs between each of the Lagrange multipliers can be difficult and time-consuming. p-dLDS eliminates the need for manual tuning of these Lagrange multipliers through probabilistic inference. We estimate $\lambda_0$ with the inferred covariance $\Sigma_x$, $\lambda_1$ with $\gamma_{t,k}$'s from SBL-DF, and $\lambda_2$ with $\sigma^2_{t-1,k}$ also from SBL-DF. In practice, we find that our model is much easier to tune compared to the original dLDS model. As demonstrated in the results presented here, the outcomes are more accurate and robust for p-dLDS than dLDS.
**Q1:** We agree that our approach for the offset may be limited when the fixed point changes across different timescales. We did consider a cost-based dictionary learning approach for the offset term early on in our work, but found it challenging to prevent the learned dynamics from collapsing to the degenerate scenario described in Lemma 1. Often, our offset dictionary would span the same subspace as the direction of the dynamics, preventing the learning of meaningful structure in the DOs.
Moreover, this approach introduces additional complexity with multiple hyperparameters (e.g. for dictionary size, coefficient structure, and more) and requires an expensive iterative solver to infer offset coefficients, further slowing down training. In contrast, our moving average approach is simple, requiring only a single hyperparameter (the window size), and can be efficiently computed in a single pass.
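To make the single-pass claim concrete, here is a minimal sketch of a trailing moving-average offset (our own illustration; the function name, trailing-window convention, and edge handling are assumptions, not the authors' implementation):

```python
import numpy as np

def moving_average_offset(x, S):
    # Estimate a slowly varying offset b_t for latent trajectories x (T x N)
    # as a trailing moving average with window size S, the single
    # hyperparameter. Windows are truncated at the start of the series.
    T, N = x.shape
    csum = np.vstack([np.zeros((1, N)), np.cumsum(x, axis=0)])
    b = np.empty_like(x, dtype=float)
    for t in range(T):
        lo = max(0, t - S + 1)
        b[t] = (csum[t + 1] - csum[lo]) / (t + 1 - lo)
    return b
```

Unlike a dictionary-based offset, there are no coefficients to infer, so the cost is a single pass over the series.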
**Q2:** Yes, the p-dLDS hyperparameter selection procedure is very similar to the dLDS one. We use random search with a budget of 1000 samples to determine the values of $S$ and $\xi$ and fit a separate model for each set of hyperparameters. In our search, we uniformly sample an integer from 2 to the length of the time series $T$. For the real dataset, the optimal offset is $S=76$ which is smaller than the timescale of the p-dLDS coefficient switching (around 150 time points). This suggests that the same DO dynamics may persist even as the fixed point of the system fluctuates throughout the experiment. We appreciate the reviewer's suggestion for clarification and will include these additional details about the hyperparameters in the supplementary materials and the camera-ready version if accepted.
**Q3:** Great question! While we did not directly explore modeling $\gamma_t$'s independently, previous works [1,2,3] have shown that the introduction of a dynamically informed prior leads to lower error and faster convergence when compared to their independent counterparts in both probabilistic and cost-based inference approaches. In particular, Figures 8 and 10 in [1] demonstrate that SBL-DF (with dynamically informed $\gamma_t$'s) shows an improvement in error and convergence time over the static SBL (with independent $\gamma_t$'s). Additionally, we highlight that our inference procedure is computationally efficient and significantly reduces the training time required for decomposed models (see additional pdf for runtime experiments).
[1] O’Shaughnessy et al. "Sparse Bayesian learning with dynamic filtering for inference of time-varying sparse signals." 2019.
[2] Charles et al. "Dynamic Filtering of Time-Varying Sparse Signals via $\ell _1 $ Minimization." 2016.
[3] Charles et al. "Convergence of basis pursuit de-noising with dynamic filtering." 2014.
**Q4:** Thank you for your suggestions! We originally did not consider using the continuous latent state to classify because we wanted to show that the learned dynamics contained information about the reach angle. However, following your suggestion, we found that the classification accuracy based on the continuous latent state $x_t$ outperformed the classification accuracy from previous experiments in all models. Similar to our previous classification setup, features are generated from the continuous latent states by averaging $x_t$'s over all points in time. We highlight that classification based on the continuous states improved p-dLDS's top-1 accuracy to 57.60\% and top-3 accuracy to 94.87\% (see additional pdf).
**Minor.** Thank you for your careful review! We will update these typos.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thoughtful and informative responses to my questions and for exploring classifying based on the continuous state for the reach experiment. I think it would be worthwhile to incorporate these points into the paper.
I remain convinced that this paper should be accepted. I intend to keep my score the same, and I encourage the other reviewers who scored lower than I did to consider raising their score. | Summary: The Authors propose a probabilistic extension to the "Decomposed Linear Dynamical Systems (dDLS)" method by Mudrik et al. (2024). The proposed p-dLDS belongs to a family of models specifically designed to describe neural activity from high-dimensional dynamical data. The effectiveness of p-dLDS is evaluated on a set of synthetic benchmarks and a real dataset.
As the primary use-case of p-dLDS -- modeling neural data -- is not my expertise, my evaluation capability for this paper is quite limited, as reflected in my confidence assessment.
Strengths: On top of giving accurate predictions of future states, the proposed method recovers a sensible switching behavior for every one of the examples reported.
Weaknesses: - The experimental results do not report the variance across independent runs. As most of the datasets are synthetic, these should be fairly straightforward to produce.
- Apart from adding a probabilistic structure to the sparse coefficients of dLDS, p-dLDS adds a slow-fast decomposition of the latent state (Eq. 4). Yet, no ablation study assessing the importance of this modeling choice is reported.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the running time of the different baselines? What is the computational complexity of p-dLDS?
- From the experimental evaluation, I can see that dLDS and p-dLDS behave quite differently. Have you observed any relationship between the posterior mean of the coefficients $\mathbf{c}_{t}$ in p-dLDS and the deterministic analog in dLDS?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N / A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 3hfN for taking their time to review our work.
**W1:** We report the standard deviation across independent runs in Tables 3, 4 and 5 in the supplementary materials. We omit these values in the main text due to limited space.
**W2:** While we do not perform an ablation study for the offset term, we discuss theoretically in section 3.1 why the lack of an offset term in dLDS leads to a representation that does not generalize in systems that contain multiple fixed points. Namely, that any parameter setting in the original dLDS dynamics model reduces to a linear dynamical system (LDS) which is characterized by a single fixed point centered around the origin. The Lorenz experiment (section 4.1) illustrates how this limitation leads to a representation that does not align with the attractor lobes, inappropriately segments the system radially with respect to the origin (Fig. 2D), and leads to decreased performance in our quantitative metrics (Table 1). In contrast, we show that the offset term in p-dLDS alleviates these problems and enables the learning of a representation that aligns with the multiple fixed points of the Lorenz attractor.
**Q1:** We include additional runtime experiments that sweep across a range of dictionary sizes and latent dimensions (see attached pdf). We observe that in general, p-dLDS is significantly faster than dLDS, and even matches rSLDS and SLDS in certain parameter settings. Our approach adopts the worst-case computational complexity of the SBL inference procedure, which is $\mathcal{O}(TK^3)$ in time and $\mathcal{O}(TK^2)$ in memory due to a matrix inversion required to compute the posterior coefficient covariance [1]. However, as noted by [2], the introduction of a dynamically informed hyperprior reduces the number of overall iterations required for convergence and typically reduces the actual runtime by an order of magnitude.
[1] Tipping. "Sparse Bayesian learning and the relevance vector machine." 2001.
[2] O’Shaughnessy et al. "Sparse Bayesian learning with dynamic filtering for inference of time-varying sparse signals." 2019.
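For illustration, here is a hedged sketch (our own; the variable names and interface are assumptions) of the per-time-step SBL coefficient posterior, whose $K \times K$ matrix inversion is the source of the $\mathcal{O}(TK^3)$ worst-case cost mentioned above:

```python
import numpy as np

def sbl_posterior(Phi, y, gamma, noise_var):
    # Posterior over coefficients c with prior c ~ N(0, diag(gamma)) and
    # likelihood y ~ N(Phi @ c, noise_var * I). Inverting the K x K
    # precision matrix below is the O(K^3) step incurred at each time step.
    precision = Phi.T @ Phi / noise_var + np.diag(1.0 / gamma)
    Sigma = np.linalg.inv(precision)
    mu = Sigma @ Phi.T @ y / noise_var
    return mu, Sigma
```

Small entries of `gamma` shrink the corresponding coefficients toward zero, which is how SBL induces sparsity.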
**Q2:** In general, we observe that the posterior mean of the coefficients in p-dLDS will be different from the deterministic analog in dLDS. In Figures 2B, 2D, 4C and 4F, we illustrate these differences by plotting the inferred coefficients from both models side by side. We find that without the probabilistic inference procedure and the time-varying offset term, the inferred coefficients of dLDS can produce switching patterns that may not align with the ground truth (Figures 2B and 2D) and can oscillate unpredictably (Figures 4C and 4F). | Summary: The paper proposes the probabilistic decomposed linear dynamical systems (p-dLDS) model, which extends the existing dLDS model. With the time-varying offset term and probabilistic formulation, p-dLDS improves upon the dLDS model in terms of robustness to noise and the ability to capture systems that orbit multiple fixed points. The authors show the advantage of the p-dLDS model over SLDS and dLDS through simulated and real datasets.
Strengths: - Clarity: The paper is presented clearly, with easy-to-understand figures and descriptions for the model, inference procedure, and experimental results. The background and related work sections are also well-written, making the paper easy to understand for readers who are not experts in the field.
- Reproducibility: The paper includes code and hyperparameter settings to reproduce its experiments, which is crucial for the ML community.
- Significance: The probabilistic extension, in addition to the time-varying offset term, adds value to the recently developed dLDS, which is sensitive to dynamics noise.
Weaknesses: - Experiments
Although existing experimental results are exciting, I have a couple of suggestions.
One is that, while the paper claims the robustness of p-dLDS to dynamics noise, it does not have experiments that show how robust p-dLDS is to noise. In other words, as you sweep the experiments with different dynamics noises, when do SLDS, dLDS, and p-dLDS show similar/different results? When does p-dLDS fail? Showing the advantage of p-dLDS over other models in a realistic setting of dynamics noise would be important.
In addition, the experiments set the number of DOs and discrete states the same for p-dLDS and rSLDS (e.g., Section 4.2). For a fair comparison, I think that the number of discrete states for rSLDS should be set separately on its own via cross-validation. It could be that rSLDS needs more discrete states to perform as well as p-dLDS on e.g., linear classification of reach directions.
- Interpretation of discrete states of p-dLDS
To my understanding, the paper does not explain how the learned p-dLDS parameters are used to segment the data to infer "discrete" states, as in Figures 2 and 4. In addition, for the NASCAR experiment, p-dLDS leads to fewer segmentations than four true segments. I am unsure whether we could say that this representation is more "parsimonious" while it seems like it leads to incorrect segmentations. In the Lorenz attractor experiment, it also seems like the rSLDS segmentation is more interpretable than p-dLDS segmentation.
Technical Quality: 3
Clarity: 3
Questions for Authors: - When does the model break due to the inclusion of time-varying offset terms? In other words, are there cases or hyperparameter settings when the model fails? How should the users avoid this? Are the covariances in line 158 learned or user-specified?
- The paper resorts to using PCA to determine the dimensionality of the model. Is there a reason why the dimensionality is not chosen via cross-validation?
- Minor: In Section E.1 Figure 6 panel A, it seems like the ELBO peaks at the first few iterations, then drops, and finally converges to a value smaller than the peak. I wonder if there's an explanation for this.
- How many hyperparameters are there for p-dLDS? How are these chosen, and how simple or complex is the model selection process for p-dLDS compared to the hyperparameter sweep for rSLDS?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - As the authors noted, one limitation of p-dLDS is that it assumes Gaussian observation noise.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your careful reading of our submission and are encouraged that you find our work to be "well-written" and that it "adds value to the recently developed dLDS".
**W1. Sweeping dynamics noise. Advantage of p-dLDS in a realistic setting.**
We agree that it's important to understand our model's behavior under different noise conditions. In our experiments, we focus on covering a range of neurally plausible sources of noise and demonstrate that p-dLDS improves significantly over existing models on a variety of metrics. Moreover, we emphasize that our clinical neurophysiological experiment showcases exactly “the advantage of p-dLDS over other models in a realistic setting”. We use real data with real noise (e.g. noisy trial structure, unexpected head movements, etc.), and demonstrate that p-dLDS can capture meaningful structure where other models cannot.
As discussed in our introduction, dynamical noise can arise from numerous sources, making it challenging to thoroughly explore all scenarios within the scope of this paper. Without a specific end application in mind, it is difficult to construct an experiment that fully addresses this point.
**W2. Selecting number of discrete states for rSLDS via cross validation.**
Great point! In our original experiments, we focus on approximately matching the model parameters to control for model complexity. However, we include an additional experiment where we sweep the number of discrete states for rSLDS (see additional pdf). While more discrete states did improve rSLDS performance, it still underperforms relative to p-dLDS, which uses a lower parameter count, highlighting the inherent limitations of using a switched formulation to model a continuum of signals.
**W3. How are "discrete" states defined for p-dLDS? Parsimonious and interpretable segmentations.**
As noted in the captions of Figures 2B and 2D, “discrete” states for dLDS models are colored according to the “dominant coefficient” which we define as the dynamic operator (DO) state with the largest coefficient magnitude. We plan on making our definition more clear in the main text in the camera-ready version of the paper, if accepted.
For NASCAR, we respectfully disagree that p-dLDS "leads to incorrect segmentations". Due to the p-dLDS model formulation, the learned components capture the fact that different track segments can share the same DO (i.e., the same movement traversed backward). While this results in fewer segments than the ground-truth switched model that generated the data, it correctly identifies redundancies among the switched segments that may be important to capture in many applications. The ability to identify these redundancies is a key advantage of p-dLDS over switched models. If a specific application requires distinguishing these cases, this is easily done by looking at the sign of the coefficients. We emphasize here again that p-dLDS recovers the track segments despite noise (unlike other methods), leading to a variety of quantitative performance improvements.
In the Lorenz experiment, while rSLDS segmentations may seem simpler and more interpretable, it inappropriately under-segments the true system by assigning the same state to both fast and slow segments, which loses important dynamics information about speed. This behavior can be problematic for signals that vary across a continuum, such as those from the reaching experiment. Moreover, our quantitative assessment (Table 1) shows that p-dLDS's representation leads to improved performance across various metrics, supporting its value despite apparent complexity.
**Q1:** While we agree that understanding the limitations of the time-varying offset term is important, fully characterizing the model's failure modes without a specific application in mind is challenging. There are numerous ways to study when the model breaks due to the offset terms, and the most relevant ones depend on the specific use case.
Regarding the covariances, lemma 2 allows users to implicitly specify the sum of both covariances $\Sigma_l + \Sigma_b$ through the selection of the smoothing window hyperparameter $S$. Given a particular $S$, our fitting procedure estimates the value of the covariances $\Sigma_l + \Sigma_b$.
**Q2:** We use PCA to determine the size of the latent space due to its simplicity, computational efficiency, and ability to control for parameter count across different models. In computational neuroscience, PCA is a widely used and well established method that has provided numerous scientific insights [citations available upon request]. In contrast, cross-validation can be costly and may select different latent space sizes for different models, which can confound differences due to parameter count with differences in the latent dynamics model. By selecting the latent dimension across all models using PCA, we control for model complexity and can more directly compare their learned representations.
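A minimal sketch of this practice follows (our own illustration; the paper does not state its exact criterion, so the explained-variance threshold and function name here are assumptions):

```python
import numpy as np

def pca_latent_dim(Y, var_threshold=0.9):
    # Choose the latent dimension as the smallest number of principal
    # components whose cumulative explained variance of the data Y (T x M)
    # exceeds var_threshold.
    Yc = Y - Y.mean(axis=0, keepdims=True)
    s = np.linalg.svd(Yc, compute_uv=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(explained, var_threshold) + 1)
```

Because the criterion is computed once from the data, the same dimension can be fixed across all models, which is the parameter-count control described above.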
**Q3:** We believe this results from the training dynamics introduced by using SGD and Momentum, combined with the challenging non-convex optimization landscape defined by our ELBO objective. Generally speaking, variational inference is only guaranteed to converge to a local optimum.
**Q4:** Hyperparameters (HPs) include $M$, $N$, and $K$ for model dimensionality; $\xi$ and $S$ for coefficient inference; and any SGD HPs (e.g., learning rate, momentum). In general, HPs can be estimated using either domain knowledge or cross-validation. While p-dLDS has a more complex parameter sweep than rSLDS, it is easier to fit than dLDS since it does not require the manual balancing of the BPDN-DF Lagrange multipliers. This greatly simplifies the fitting process for decomposed models.
---
Rebuttal 2:
Comment: I would like to thank the authors for their response to my comments and clarifications. I would like to raise my score from 6 (weak accept) to 7 (accept). | Summary: The paper presents a probabilistic version of the dLDS model, first presented by Mudrik et al (2024). The primary impetus for this model was to present a version of the dLDS that was more robust to noise, although the authors here also included a slowly-evolving offset term meant to capture evolving fixed points. Inference is conducted by a variational objective which is optimized with a combination of ad-hoc but reasonable methods. The paper paves the way for many other probabilistic state-space models including those with non-Gaussian measurements.
Strengths: Originality - The model is original in the sense that it has not been presented previously.
Quality - The paper is mostly well written and well reasoned. I see virtually no major conceptual flaws and only minor problems. Both the simulation experiments and analysis demonstration are well conceived and of interest to the computational neuroscience community. The model is well conceived and represents a natural extension of the dLDS model.
Clarity - Very well organized, easy to follow, and a pleasure to read.
Significance - I wouldn’t exactly call this a weakness but the extension to dLDS model is so natural as to be virtually obvious. The contribution itself, while seemingly inevitable, was nonetheless executed by the authors first (props to them) and with high quality. It serves as a strong rung in what is sure to be a dLDS ladder.
Weaknesses: The primary weakness is some technical confusion that I have with the paper. I detail it below.
Technical Quality: 3
Clarity: 4
Questions for Authors: I am a bit confused about the decomposition of the coefficient prior defined on lines 168-169 following equation (5). Here the authors claim $p(c_t \mid c_{t-1}, \gamma_t)$ is defined by a factorization $N(c_{t-1}, \sigma_{t-1}) N(0, \gamma_t)$. The authors state that the first of these factors encourages smoothness while the second encourages sparsity. However, it is not at all clear to me that this makes sense. First, the prior is a conditional distribution over $c_t$ for a single coefficient, so what is the factorization over exactly? Did the authors mean to present a joint distribution? Over which variables? I can see that the authors would like $c_t$ to have both properties (sparsity, smoothness), but this particular specification doesn't make sense. Second, $\sigma_t$ and its inference are undefined anywhere in the paper. Third, on lines 215-216 the authors state that parameters can be learned by SGD "which is possible when we assume that the covariance matrices have diagonal structure." Do the authors specifically mean the posterior covariances (eq 8) or the prior covariances over the latent variables (line 158)? Either way, what is the limitation on SGD learning the parameters when these covariance matrices are not diagonal? Lastly, the authors state on line 187 "the approximate posterior becomes…[some Gaussians]". It is not clear from the text if the Gaussian form of this variational posterior is a choice of the authors or if this is a variational optimum that drops out of the model structure and factorized structure of q. Can the authors please clarify?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: No major conceptual limitations but technical details should be clarified
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper thoroughly. We are encouraged that you find our work to be "original", "a pleasure to read", "well-conceived", and of "interest to the computational neuroscience community".
**Q1 and Q2: Factorization of the coefficient transition and Definition of $\sigma_t$**
We clarify that the transition density $p(c_{t,k} | c_{t-1,k}, \gamma_{t,k} )$ over the coefficients $c_t$ has the following functional form:
$$
\begin{align}
p(c_{t,k} | c_{t-1, k} , \gamma_t) &\propto \exp \left(- \frac{c_{t,k}^2 }{2\gamma_{t,k}} - \frac{(c_{t,k} - c_{t-1,k})^2} {2 \sigma_{t-1,k}^2}\right) \\\\
&= \exp \left(- \frac{c_{t,k}^2 }{2\gamma_{t,k}} \right) \exp \left( - \frac{(c_{t,k}- c_{t-1,k})^2} {2 \sigma_{t-1,k}^2}\right) \\\\
&\propto N(c_{t,k} ; 0, \gamma_{t,k}) N(c_{t,k} ; \hat{c}\_{t-1,k}, \sigma_{t-1,k}^2)
\end{align}
$$
In the first line, we propose a density that captures the constraints of sparsity and smoothness for the inferred coefficients $c_{t,k}$. A small variance around 0, $\gamma_{t,k}$, promotes sparsity while a small variance around the previous coefficient, $\sigma^2_{t-1,k}$, promotes smoothness. The second line factorizes this density into two separate terms and the third line shows that each term is proportional to a normal distribution with their respective parameters.
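As a quick sanity check of this factorization (our own illustration, not from the paper): the product of the two Gaussian factors is itself proportional to a Gaussian whose precision is the sum of the factor precisions and whose mean is the precision-weighted average:

```python
import numpy as np

def gaussian_product(mu1, var1, mu2, var2):
    # Parameters of the Gaussian proportional to N(mu1, var1) * N(mu2, var2).
    # With mu1 = 0, var1 = gamma (sparsity factor) and mu2 = c_{t-1},
    # var2 = sigma^2 (smoothness factor), a small gamma pulls the combined
    # mean toward zero while a small sigma^2 pulls it toward c_{t-1}.
    prec = 1.0 / var1 + 1.0 / var2
    var = 1.0 / prec
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var
```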
While the idea of combining two shrinkage effects in a single density has been explored in previous works [1,2,3,4], those approaches generally require manual balancing of the two penalties. In contrast, we estimate these parameters using SBL-DF results. Specifically, SBL-DF produces estimates for the sparsity variance $\gamma_{t,k}$ through the estimated hyperparameter posterior $q(\gamma_{t,k} )$ and the smoothness variance $\sigma^2_{t-1,k}$ through the approximate coefficient posterior $q(c_{t,k} ) = N(c_{t,k} ; \hat{c}\_{t,k} ,\sigma_{t,k}^2 )$.
We thank the reviewer for pointing out that our discussion on the coefficient transition densities could be improved and will incorporate this discussion into the camera-ready version if accepted.
[1] Irie, Kaoru. "Bayesian dynamic fused LASSO." 2019.
[2] Casella et al. "Penalized regression, standard errors, and Bayesian lassos." 2010.
[3] Li and Lin. "The Bayesian elastic net." 2010.
[4] Kakikawa et al. "Bayesian fused lasso modeling via horseshoe prior." 2023.
**Q3: SGD over covariance matrices.**
We clarify that we assume a diagonal structure for the posterior covariances. Estimating general covariance matrices $\Sigma$ is challenging both mathematically and computationally due to the requirements for a valid $\Sigma$ (e.g., symmetry, positive semidefiniteness). Vanilla SGD over individual matrix entries does not inherently respect these constraints and does not guarantee that the solution is a valid covariance matrix. Typically, solving for $\Sigma$ involves a semidefinite program (SDP) requiring specialized methods like ellipsoid or interior point solvers [1]. However, we simplify this problem by considering only when $\Sigma$ is diagonal (i.e., $\Sigma = {\rm diag} (\sigma_1 ,\dots, \sigma_n)$ ). This allows us to simplify general SDP optimization to an unconstrained problem by operating in the log space of the diagonal variance terms. We find that this approach works well in our experiments and is computationally efficient (see additional pdf for runtimes).
[1] Vandenberghe and Boyd. "Semidefinite programming." SIAM Review 1996.
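A minimal sketch of the log-space trick described above (our own illustration; the update rule and names are assumptions): parameterizing $\Sigma = {\rm diag}(\exp(\rho))$ and taking SGD steps in $\rho$ keeps every variance positive, so any iterate is a valid diagonal covariance:

```python
import numpy as np

def sgd_step_log_variances(rho, grad_sigma2, lr=0.01):
    # One unconstrained SGD step on the log-variances rho, where
    # Sigma = diag(exp(rho)). Chain rule: d/d(rho) = sigma2 * d/d(sigma2),
    # and exponentiating back guarantees positive variances.
    sigma2 = np.exp(rho)
    rho_new = rho - lr * sigma2 * grad_sigma2
    return rho_new, np.exp(rho_new)
```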
**Q4: Derivation of the approximate posterior.**
Thank you for pointing out that equation 8 (following line 187) could be improved in clarity. It is the variational optimum that results from the model structure assumed in line 157. We derive this result below. First, we acknowledge two typos:
1. The joint distribution of pdLDS (equation 6) is missing emission densities and will be updated to,
$$ \begin{align}
p(x, y, c, \gamma | \theta ) = p(x_1) \left[ \prod_{t=1}^{T} p(y_t | x_t) \right] \left[ \prod_{t=1}^{T-1} p(x_{t+1} | x_{t}, c_{t}) \left[ \prod_{k=1}^{K} p(c_{t+1,k} | c_{t,k}, \gamma_{t+1,k}) p(\gamma_{t+1, k} | c_{t,k}) \right] \right] \tag{1}
\end{align}$$
2. The formula for the optimal coordinate ascent variational update (equation 7) should be the exponentiated log of the joint distribution [1] and will be updated to,
$$
\begin{align}
q(x) \propto \exp \left( \mathbb{E}_{q(c, \gamma)} \left[ \log p(x, y, c, \gamma | \theta) \right] \right). \tag{2}
\end{align}$$
We proceed by substituting equation 1 above into equation 2, dropping out constant terms,
$$\mathbb{E}_{q(c, \gamma)} \left[ \log p(x, y, c, \gamma | \theta) \right] = \mathbb{E}_{q(c, \gamma)} \left[ \log p(x_1) + \sum_{t=1}^{T} \log p(y_t | x_t) + \sum_{t=1}^{T-1} \log p(x_{t+1} | x_{t}, c_{t}) \right] + {\rm const.} \tag{3}$$
Next, our assumed decomposition in line 157 gives us that $p(x_{t+1} | x_{t}, c_t) = p(x_{t+1} = l_{t+1} + b_{t+1})$. The distribution over $x_{t+1}$ is computed as,
$$
\begin{align}
p(x_{t+1}=l_{t+1} + b_{t+1}) &= p(l_{t+1} | l_t, c_t) \star p(b_{t+1} | b_{t}) \\
&= N(x_{t+1} ; l_t + F_t l_t + b_t, \Sigma_x) \tag{5}
\end{align}
$$
where $\star$ denotes the convolution operator and $\Sigma_x = \Sigma_b + \Sigma_l$. Substituting equations 5 and 3 into equation 2 gives the optimal variational update,
$$q(x) \propto N(x_1; \mu_1, \Sigma_1) \prod_{t=1}^T N(y_t; D x_t, \Sigma_y) \prod_{t=1}^{T-1} N(x_{t+1} ; l_t + \hat{F}_t l_t + b_t, \Sigma_x)
$$
where $\hat{F}_t$ is estimated from samples $\hat{c}_t \sim q(c,\gamma)$. We note that the additional emissions term in equation 1 above leads to an additional term in our equation 8.
Again, we thank the reviewer for bringing this to our attention. We will correct the typos and include this discussion in the camera-ready version if accepted.
[1] Blei et al. "Variational inference: A review for statisticians." 2017. | Rebuttal 1:
Rebuttal: Thank you to all four reviewers for their insightful feedback. We respond to reviewers individually below, but provide a description of our additional experiments here. Please see our attached pdf for figures relevant to the experiments below.
**Experiments:**
1. **Increasing the number of rSLDS discrete states in the Reach Experiment:** Reviewer M5AA suggests that the optimal number of rSLDS discrete states may differ from the one used. We sweep the number of discrete states $K$ from 4 to 20 with a step size of 2 and compare against the p-dLDS results at $K=4$. We find that although increasing the number of discrete states leads to improvements in rSLDS performance, it does not outperform p-dLDS performance at $K=4$.
2. **Training Runtime:** Reviewer 3hfN recommends that we compare the runtimes across different models. We report the time per training iteration when sweeping across the dictionary size $K$ and the latent dimensions $N$ separately from 5 to 50 with a step size of 5. For all experiments, we report training times on a single time series with length $T=1000$ and observation dimension $M=100$. When sweeping $K$, we fix $N=10$. Similarly when sweeping $N$, we fix $K=10$. We see that the p-dLDS inference procedure significantly reduces the training time required for decomposed models.
3. **Reach Classification using Continuous Latent States:** Following Reviewer mAN7's suggestion, we compared classification accuracy based on the dynamics ($z$ or $c$) to that based on the continuous states ($x$). We computed classifier features from $x_{1:T}$ by taking the average over time for each trajectory, similar to section D.5. This improved the Top-1 and Top-3 accuracies for all models, indicating that location in latent state space can be highly informative about specific reach directions. In this experiment, p-dLDS achieved Top-1 and Top-3 accuracies of 57.69\% and 94.87\% respectively.
Pdf: /pdf/fd78d5e47442e6ef6cf9889d34255bf966cf51e6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Linear Uncertainty Quantification of Graphical Model Inference | Accept (poster) | Summary: This paper proposes an uncertainty quantification method for graphical models that uses linear propagation to model uncertainty. Experiments show that it can achieve competitive results and fast convergence in downstream tasks.
Strengths: 1: The paper is well-written. The motivation for the paper and the proposed solutions are clear, making it easy for readers to follow.
2: Theoretical analysis is solid. The authors clearly explain the necessary preliminaries and provide detailed proofs.
Weaknesses: 1: Most of the datasets used for validating the properties of the proposed method are datasets with high homophily, which makes the validation less comprehensive. It would be better to conduct experiments on several datasets with high heterophily.
2: The illustrations in some of the figures are vague. For example, Figure 1 is quite inclusive but lacks some necessary explanations. The meaning of distribution Beta and the values of alpha and beta are hard to understand (though I clearly understand the calculation part).
3: The formatting of the paper is less professional, especially in the appendix. In B.3.2, there are large blanks between the title and Table 2. Also, Table 3 is oversized.
Technical Quality: 2
Clarity: 3
Questions for Authors: Can you please explain the upper part of Figure 2? I'm a bit confused about its meaning.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed and constructive reviews.
## Regarding the Formatting Issues
>The formatting of the paper is less professional, especially in the appendix. In B.3.2, there are large blanks between the title and Table 2. Also, Table 3 is oversized.
Thank you for pointing out these formatting issues. We will carefully review and address all formatting concerns, especially in the appendix, and make the necessary corrections in the camera-ready version.
## Further Explanation of the Figures
>The illustrations in some of the figures are vague. For example, Figure 1 is quite inclusive but lacks some necessary explanations. The meaning of distribution Beta and the values of alpha and beta are hard to understand (though I clearly understand the calculation part).
Let's understand the meaning of the Beta distribution and its parameters in Figure 1 within the context of an active learning scenario. Consider a graph active learning task where each annotator can label a node as one of two categories. In this case, the Beta distribution can be used to model the uncertainty in the probability of a node belonging to a specific category. When considering a node individually, $\alpha$ can be interpreted as the number of times the node is labeled as category A plus one, and $\beta$ can be interpreted as the number of times the node is labeled as category B plus one. The sum $\alpha + \beta$ represents the total number of times the node has been labeled plus two; the larger this sum, the more times the node has been labeled, and thus the lower the uncertainty.
Thank you for highlighting the points that might confuse readers. We will address this in the camera-ready version by incorporating the specific scenarios mentioned above to make Figure 1 easier to understand.
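A toy numeric check of this interpretation, using the Beta variance as a concrete stand-in for "uncertainty" (the label counts below are hypothetical, not from the paper's experiments):

```python
# Beta(alpha, beta) with alpha = (#labels "A") + 1, beta = (#labels "B") + 1.
# A larger alpha + beta (more labelings) yields a tighter distribution.

def beta_params(n_a, n_b):
    """Pseudo-counts from label counts, with the +1 prior described above."""
    return n_a + 1, n_b + 1

def beta_variance(alpha, beta):
    """Variance of Beta(alpha, beta): shrinks as alpha + beta grows."""
    return alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))

few = beta_variance(*beta_params(2, 1))     # node labeled only 3 times
many = beta_variance(*beta_params(20, 10))  # labeled 30 times, same ratio
assert many < few   # more labels => larger alpha + beta => lower uncertainty
```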
>Can you please explain the upper part of Figure 2? I'm a bit confused about its meaning.
The upper part of Figure 2 illustrates, through a 3-node chain graphical model, what information we need to input if we want to quantify the uncertainty in the posterior belief of each node using LinUProp. To make this easier to understand, we can think of it as a social network.
In a simple social network with 3 users, each user can be categorized into one of two groups based on whether they are music enthusiasts or not. If we want to know the uncertainty in the posterior probability of each user being a music enthusiast, considering the influence of other users, we need to provide two types of information:
- The interval width representing the uncertainty about whether each user is a music enthusiast when considered individually (i.e., the prior interval widths $e_1, e_2, e_3$). This could be derived from user-filled profiles. If a user hasn't specified their music preferences, we assign a wider interval width; otherwise, we assign a narrower interval width.
- The "compatibility matrix" between every two connected users. For example,
$$ \mathbf{H}_{12}=\begin{bmatrix} 0.8 & 0.2 \\ 0.2 & 0.8 \end{bmatrix} $$
$\mathbf{H}_{12}(i,j)$ denotes the degree of association between class $i$ of user 1 and class $j$ of user 2.
This matrix suggests that users 1 and 2 are more likely to either both like or both dislike music, rather than only one of them liking music.
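To make these inputs concrete, here is a small numpy sketch of the 3-node chain. The propagation rule below is a simplified, hypothetical stand-in for LinUProp's actual update (additive and linear in the neighbor widths, with a coupling strength read off $\mathbf{H}$); it is meant only to illustrate the shape of the inputs and the additive behavior.

```python
import numpy as np

def coupling(H):
    # Distance of H from the uninformative all-0.5 matrix: 0 means the
    # edge carries no information, larger means stronger (dis)agreement.
    return float(np.abs(H - 0.5).mean())

e_prior = np.array([0.8, 0.1, 0.8])   # wide = no profile info, narrow = stated
H12 = np.array([[0.8, 0.2],
                [0.2, 0.8]])          # users 1 and 2 tend to agree
H23 = H12.copy()                      # same coupling on the second edge

w12, w23 = coupling(H12), coupling(H23)   # both 0.3 here
A = np.array([[0.0, w12, 0.0],
              [w12, 0.0, w23],
              [0.0, w23, 0.0]])

# One additive propagation step: a node's posterior width is its prior
# width plus linearly scaled neighbor widths. Uncertainty can grow; it is
# never forced to shrink as in evidence-counting schemes.
e_post = e_prior + A @ e_prior
```

Note how node 2's width (0.1) grows substantially because both neighbors are uncertain, while the uncertain endpoints barely change.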
>Most of the datasets used for validating the properties of the proposed method are datasets with high homophily, which makes the validation less comprehensive. It would be better to conduct experiments on several datasets with high heterophily.
LinUProp is a UQ method, so we expect to experimentally verify its performance in uncertainty quantification. We evaluate UQ methods in graph active learning tasks through an uncertainty-based node selection strategy because a good UQ method can prioritize nodes with high uncertainty for labeling, achieving the highest possible accuracy within a limited labeling budget. This approach is reasonable for validating UQ methods on homophily graphs, as prioritizing high-uncertainty nodes can help clarify the categories of neighboring nodes, and after propagation, quickly reduce global uncertainty, achieving higher accuracy with a lower labeling budget. However, on heterophily graphs, even if high-uncertainty nodes are prioritized for labeling, it may not help clarify the categories of neighboring nodes (especially when there are many node categories). In such cases, prioritizing high-uncertainty nodes may not lead to a rapid increase in accuracy, thus failing to appropriately reflect the performance of the UQ method.
---
Rebuttal Comment 1.1:
Title: Response to Author‘s Rebuttal
Comment: I would like to thank the authors for their detailed discussion and for addressing my questions. The explanation of the figures is essential for me to understand the paper. This is a good work. I sincerely hope the authors can further improve the paper to make it better. I've increased my score. Good luck!
---
Reply to Comment 1.1.1:
Comment: We appreciate your informative feedback, which is extremely useful to make our work better. We are also glad that our explanations are helpful. Thank you for your efforts and support! | Summary: [Edit: After discussion, the Authors have addressed my concerns on the presentation and discussion of their results. I have accordingly increased my score (previously 5).]
The paper proposes an alternative algorithm for message passing for uncertainty quantification in graphical models. The method is linear, with (presumed) computational performance gains over existing methods, less susceptibility to certain kinds of bias, and supporting theoretical justification. The method is empirically explored in simulation studies.
Strengths: Theoretical justification of the method is helpful.
Weaknesses: Some areas were hard to understand. E.g. the motivating discussion of why some (but not all) existing methods could only reduce uncertainty.
Limitations of the method have not really been addressed. It seems the proposed method is perfect!
Technical Quality: 3
Clarity: 3
Questions for Authors: I was a little confused on the comparative studies.
1) It isn't clear to me whether the proposed method produces unbiased results (though my guess is that it does not, and I note the content in section 4.2 to this end, and the claim it avoids bias in Section 7). Other methods, as discussed, do provide unbiased results, but perhaps they are very computational or have other downsides. So it's ok for the proposed method to be biased if there is a substantial speed gain, for example. (*Is* there a speed gain? Actual comparisons with other methods don't seem to have been made.) Or, put another way, the method presumably gains lots by designing for linear speed. What did it lose?
2) I couldn't find anywhere which described what "BP" or "NETCONF" were (Figures 5-6). This somewhat reduces the reader's ability to understand the information being presented.
3) Tables 2 and 3 (Appendix) have generously bolded the results corresponding to the papers methodology, but the +/- standard errors of the means presented strongly suggest that in many cases there is no statistically significant difference between many of the methods (bolded, or otherwise). This would seem to undermine the accuracy performance claims.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper checklist states that the limitations are outlined in the Conclusion (section 7). However, no limitations of the proposed methods seem to be discussed here. (The statement "However, ..." is not a limitation.) I would expect that the authors really ought to have some things to say here. (Presumed) strong computational gains of methods usually have a tradeoff.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed and constructive reviews.
>Some areas were hard to understand. E.g. the motivating discussion of why some (but not all) existing methods could only reduce uncertainty.
The sentence you mentioned in the original text is: “However, this results in **any neighbor of a node will necessarily reduce the uncertainty**, even neighbors with noise or missing information, thus underestimate posterior uncertainty, as depicted in Figure 1.” In the previous sentence, we explained the reason as “Existing works [7, 35]...modeling beliefs as Dirichlet distributions and **treating neighboring nodes as observations**.” At the end of this sentence, we indicated that Figure 1 provides a further explanation of this point. Existing methods [7, 35] treat neighbors as **"observations,"** implicitly assuming that a node’s neighbors **are always evidence** and thus will necessarily reduce uncertainty.
>Limitations of the method have not really been addressed. It seems the proposed method is perfect!
>The paper checklist states that the limitations are outlined in the Conclusion (section 7). However, no limitations of the proposed methods seem to be discussed here. (The statement "However, ..." is not a limitation.) I would expect that the authors really ought to have some things to say here. (Presumed) strong computational gains of methods usually have a tradeoff.
From the derived convergence conditions of LinUProp ($\rho(\mathbf{\Psi_{1}}^{'}+\text{Diag}(\mathbf{\Psi_{2}}^{'}\mathbf{Q}))<1$), we can see that when the graph is very large, if there is strong global (most edges) homophily/heterophily, the norm of the matrix $T=\mathbf{\Psi_{1}}^{'}+\text{Diag}(\mathbf{\Psi_{2}}^{'}\mathbf{Q})$ will be large, which may cause LinUProp to fail to satisfy the convergence condition of the spectral radius of $T$ being less than 1. However, if only local (a few edges) strong homophily/heterophily exists in a large-scale graph, LinUProp can still converge. This is a limitation of LinUProp, and we will include the above discussion in the camera-ready version.
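The role of the spectral radius can be checked numerically. The matrices below are arbitrary illustrations, not derived from a real graph or from $\mathbf{\Psi}$ and $\mathbf{Q}$; the sketch only demonstrates that fixed-point iteration with a matrix $T$ settles when $\rho(T) < 1$ and blows up when it does not.

```python
import numpy as np

def spectral_radius(T):
    """rho(T): largest eigenvalue magnitude."""
    return float(np.max(np.abs(np.linalg.eigvals(T))))

def converges(T, e0, steps=200, tol=1e-8):
    """Iterate e <- T e + e0 and report whether the iterates settle."""
    e = e0.copy()
    for _ in range(steps):
        e_next = T @ e + e0
        if np.linalg.norm(e_next) > 1e12:       # clearly diverging
            return False
        if np.linalg.norm(e_next - e) < tol:    # reached a fixed point
            return True
        e = e_next
    return False

T_weak = 0.4 * np.eye(3) + 0.05      # mild coupling everywhere: rho = 0.55
T_strong = 0.9 * np.ones((3, 3))     # strong global coupling: rho = 2.7

e0 = np.ones(3)
assert spectral_radius(T_weak) < 1 and converges(T_weak, e0)
assert spectral_radius(T_strong) > 1 and not converges(T_strong, e0)
```

This mirrors the limitation discussed above: strong coupling on most edges inflates $\rho(T)$ past 1, while strong coupling on only a few edges need not.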
>It isn't clear to me whether the proposed method produces unbiased results ......
The term "unbiased" in the abstract was intended to communicate that our method aims to avoid the underestimation of uncertainty, which some other methods might suffer from (NETCONF/SocNL). We didn't mean to claim that our method is mathematically unbiased in the statistical sense without further rigorous proof, although we validated the consistency of LinUProp with MC simulation through the Correctness experiment in Section 5.1. This issue appears only in two sentences in the abstract and one sentence in the conclusion, and is not mentioned elsewhere in the original text. To avoid any confusion, we propose to revise the two sentences in the abstract and the one sentence in the conclusion. Additionally, we have not yet studied the relationship between the speed of LinUProp and its unbiasedness, which might be a worthwhile topic for future research.
- Abstract
- There are fast UQ methods for graphical models with closed-form solutions and convergence guarantee but with ~~biased~~ uncertainty underestimation.
- We propose LinUProp, a UQ method that utilizes a
novel linear propagation of uncertainty to model uncertainty among related nodes additively instead of multiplicatively, to offer linear scalability, guaranteed convergence, and ~~unbiased~~ closed-form solutions that do not underestimate uncertainty.
- Conclusion
- Unlike its competitors, LinUProp does not assume neighbors necessarily reduce uncertainty and thus avoids ~~biased~~ uncertainty underestimation.
>I couldn't find anywhere which described what "BP" or "NETCONF" were (Figures 5-6). This somewhat reduces the readers ability to understand the information being presented.
As stated in the captions of Figures 5 and 6, BP and NETCONF are two methods for **inferring posterior beliefs**. We have detailed Belief Propagation (BP) in the Preliminaries and introduced how NETCONF works through a simple case in Figure 1. The key difference between the two is that BP can only provide a point estimate of the posterior beliefs, whereas NETCONF can derive a Dirichlet posterior from which a "Certainty Score" can be obtained. This "Certainty Score" can be used to estimate the uncertainty of the posterior, and its calculation method is provided in lines 275-277.
>Tables 2 and 3 (Appendix) have generously bolded the results corresponding to the papers methodology, but the +/- standard errors of the means presented strongly suggest that in many cases there is no statistically significant difference between many of the methods (bolded, or otherwise). This would seem to undermine the accuracy performance claims.
Thank you for your question regarding standard deviations and statistical significance. We have added annotations addressing these two points to Tables 2 and 3 in the global response PDF. Specifically, we have used $\underline{\text{underlined}}$ values to emphasize the method with the lower standard deviation between the LinUProp winner and the non-LinUProp winner. We have also used superscripts to indicate significant superiority between the LinUProp winner and the non-LinUProp winner (pairwise t-test at a 5% significance level (*) and 10% significance level ($\dagger$)). We can observe that the LinUProp-based methods (BB/LC+BB) consistently maintain low standard deviations across 4 datasets, 2 inference methods, and 4 labeling accuracies. This level of consistency is not exhibited by other methods, even though some approaches may achieve comparable accuracy to LinUProp in certain specific cases.
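For concreteness, here is a self-contained sketch of the pairwise (paired) t-test annotation over n = 10 runs. The accuracy vectors are made-up numbers, and the Student-t critical values for 9 degrees of freedom are hardcoded to keep the snippet numpy-only.

```python
import numpy as np

def paired_t(a, b):
    """t statistic of the paired t-test on per-run accuracy differences."""
    d = np.asarray(a) - np.asarray(b)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

rng = np.random.default_rng(0)
acc_linuprop = 0.85 + 0.01 * rng.standard_normal(10)   # made-up accuracies
acc_baseline = 0.82 + 0.02 * rng.standard_normal(10)

t = abs(paired_t(acc_linuprop, acc_baseline))
T_CRIT_5, T_CRIT_10 = 2.262, 1.833   # two-sided critical values, df = 9
mark = "*" if t > T_CRIT_5 else ("†" if t > T_CRIT_10 else "")
```

The resulting `mark` corresponds to the superscripts in the annotated tables (significant at 5% or 10%, or no mark).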
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the detailed responses.
* Some areas were hard to understand.
Thank you for the explanation. Of course, in the paper I had read all of these parts (several times) and still struggled to understand (merely saying something doesn't make it understandable), and I see another reviewer had similar issues. Information density is one part of this. It would be good if the explanations in this area were given a once over to perhaps improve the explanation in case others have similar issues. Similarly for the visibility of BP/NETCONF - if one has to look into a densely written figure caption to find the reference to support a method mentioned in another figure caption ... something is wrong with the presentation.
* Limitations
Thank you - limitations should always be clearly stated and not hidden. As a result, I do think that some mention needs to be given in the current paper for the computational overheads of LinUProp for the simulations run. If the method takes 10 times longer to run than the competitor and only achieves a small improvement, this is information that should be provided to the reader, right?
* Statistical significance in Tables 2 & 3
Thank you for the modified tables - this is much more informative than before. It now offers at-a-glance understanding of where there is evidence that LinUProp actually does beat the best of the competitors. 10% significance is probably a stretch though. So now it's notable that there is only a small amount evidence for improved performance over BP for PolBlogs (Table 2) and Cora, and not much at all over NETCONF for almost all datasets. I hope that this information will be fairly discussed in Section 5, as there is *some* evidence of improved performance in *some* cases, but the overall evidence here isn't strong.
I note that you have control over the sample size (here n=10) for this t-test, so you could potentially improve these outcomes with improved numbers of simulations (which should have been done in the first place; hint). I also note that it's ok to get exactly the same performance as a competitor method if you're doing the analysis much faster (see above point). Finally, it's also ok to get the same performance as a competitor for exactly the same computational overheads if there is conceptually some other advantage to be had, but this case is the authors' to make.
* Thank you for your other responses (no further comments).
---
Reply to Comment 1.1.1:
Title: Further Response to Reviewer rPk6 (2)
Comment: > - Limitations
Thank you - limitations should always be clearly stated and not hidden. As a result, I do think that some mention needs to be given in the current paper for the computational overheads of LinUProp for the simulations run. If the method takes 10 times longer to run than the competitor and only achieves a small improvement, this is information that should be provided to the reader, right?
The computational cost of LinUProp has been **explicitly** presented in Figure 4(b). Figure 4(b) demonstrates LinUProp's linear scalability, where the last data point of each dataset's corresponding line represents LinUProp's runtime when including all edges in that dataset. To address your concerns about LinUProp's computational cost relative to competitors, we will further discuss the comparison of computation times between LinUProp and its competitors.
Under the same experimental environment as the linear scalability experiments above, we first tested NETCONF on 4 datasets, comparing its computation time when including all edges with that of LinUProp. The following table shows the runtime (in seconds):
| Method | Cora | Citeseer | Pubmed | Polblogs |
|----------|--------|----------|--------|----------|
| NETCONF | 0.0225 | 0.0216 | 0.0627 | 0.0126 |
| LinUProp | 0.0256 | 0.0181 | 0.0559 | 0.0084 |
From the table above, it's evident that the computation times of NETCONF and LinUProp are similar. However, LinUProp's performance shows a significant advantage over the NETCONF-based UQ method, which can be seen at a glance from Tables 2 and 3 (BB/LC+BB vs CS/LC+CS).
Regarding the unbiased MC simulation method, it is highly time-consuming and therefore difficult to use as an algorithm for selecting nodes in active learning. In our experiments in Section 5.1, we verified the consistency between LinUProp and MC (100,000 samples) on small graphs. Below are the times (in seconds) required for **just one** sampling on each dataset:
| Cora | Citeseer | Pubmed | Polblogs |
|-------|----------|---------|----------|
| 4.4634 | 4.2388 | 52.5918 | 9.8378 |
For the "Least Confidence" and "Entropy" methods, they only require simple calculations of confidence and entropy for each node, respectively. However, calculating the confidence or entropy for each node requires first computing the posterior probability for each node. If BP is used as the inference method, obtaining the posterior probability for each node requires running BP once, which takes as long as performing one MC sampling in the table above, and is **significantly slower than LinUProp**. If NETCONF is used as the inference method, then its runtime will be similar to that of LinUProp. We will clarify LinUProp's unique advantages compared to these two methods in our response to the next question.
---
Rebuttal 2:
Comment: Thank you for your feedback. Regarding the concerns you raised, it's our pleasure to provide further clarification:
>- Some areas were hard to understand.
Thank you for the explanation. Of course, in the paper I had read all of these parts (several times) and still struggled to understand (merely saying something doesn't make it understandable), and I see another reviewer had similar issues. Information density is one part of this. It would be good if the explanations in this area were given a once over to perhaps improve the explanation in case others have similar issues.
Thank you for helping us understand that the following description alone might still confuse some readers: "Existing methods [7, 35] treat neighbors as **"observations,"** implicitly assuming that a node's neighbors **are always evidence** and thus will necessarily reduce uncertainty."
A fundamental assumption of existing works [7, 35] (NETCONF/SocNL) is that "a node's neighbors are always evidence." Let's consider a simple social network example where the task is to determine whether each user is a music enthusiast. NETCONF/SocNL use the Beta distribution ($\mathcal{B}(\alpha,\beta)$) to describe the uncertainty of class probabilities in binary classification tasks, where parameters $\alpha$ and $\beta$ are pseudo-counts. In this context, $\alpha$ and $\beta$ can be understood as the virtual numbers of times a user has been observed to like or dislike music; thus a larger $\alpha+\beta$ indicates lower uncertainty. Let's assume that a user who claimed to be a music enthusiast in their profile follows $\mathcal{B}(10,1)$, one who claimed not to be follows $\mathcal{B}(1,10)$, and one who didn't provide this information follows $\mathcal{B}(1,1)$ (equivalent to a uniform distribution over the interval [0, 1]).
User A hasn't provided information about being a music enthusiast ($\mathcal{B}(1,1)$), but has two friends who are ($\mathcal{B}(10,1)$). User B also hasn't provided this hobby information ($\mathcal{B}(1,1)$), and in addition to having two music enthusiast friends ($\mathcal{B}(10,1)$), also has two friends who haven't specified their hobby information ($\mathcal{B}(1,1)$).
NETCONF/SocNL, based on the assumption that "neighbors are always evidence," will use neighbors as evidence to update A and B. **This is achieved by adding the parameters of neighbors to one's own.** Under this assumption, any neighbor will inevitably increase $\alpha+\beta$, thereby leading to lower uncertainty after the update. Returning to the above example, after updating, A will follow $\mathcal{B}(1+10+10,1+1+1)=\mathcal{B}(21,3)$, while B will follow $\mathcal{B}(1+10+10+1+1,1+1+1+1+1)=\mathcal{B}(23,5)$. At this point, although both A and B would be considered music enthusiasts, B's uncertainty would be lower because B has more evidence (neighbors). Despite the two additional neighbors not providing information, B would still be considered to have lower uncertainty. Consequently, NETCONF/SocNL would be more confident in considering B a music enthusiast.
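The additive update in this example is easy to verify in code. This is a toy rendition of the evidence-counting behavior described above, not NETCONF's full algorithm:

```python
def netconf_style_update(own, neighbors):
    """Add each neighbor's Beta pseudo-counts to the node's own."""
    alpha, beta = own
    for a, b in neighbors:
        alpha += a
        beta += b
    return alpha, beta

# User A: no profile info, two enthusiast friends.
post_a = netconf_style_update((1, 1), [(10, 1), (10, 1)])
# User B: same two friends plus two uninformative ones.
post_b = netconf_style_update((1, 1), [(10, 1), (10, 1), (1, 1), (1, 1)])

assert post_a == (21, 3)
assert post_b == (23, 5)
# alpha + beta grew more for B, so B is deemed *less* uncertain even
# though the extra neighbors carried no information.
assert sum(post_b) > sum(post_a)
```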
> Similarly for the visibility of BP/NETCONF - if one has to look into a densely written figure caption to find the reference to support a method mentioned in another figure caption ... something is wrong with the presentation.
Thank you for your valuable feedback. To make it easier for readers to find the introduction to NETCONF without having to look at the caption of another figure, we will explicitly include the introduction to NETCONF in the Preliminary section. Currently, there is already an introduction to BP in the Preliminary section. After the improvement, BP and NETCONF will be presented together, enhancing readability.
Title: Further Response to Reviewer rPk6 (1)
---
Rebuttal Comment 2.1:
Comment: Thank you - this expanded discussion/example is *so* much clearer. The extra space that this will take up in the paper will be very much worth it.
---
Rebuttal 3:
Title: Further Response to Reviewer rPk6 (3)
Comment: >- Statistical significance in Tables 2 & 3
Thank you for the modified tables - this is much more informative than before. It now offers at-a-glance understanding of where there is evidence that LinUProp actually does beat the best of the competitors. 10% significance is probably a stretch though. So now it's notable that there is only a small amount evidence for improved performance over BP for PolBlogs (Table 2) and Cora, and not much at all over NETCONF for almost all datasets. I hope that this information will be fairly discussed in Section 5, as there is _some_ evidence of improved performance in _some_ cases, but the overall evidence here isn't strong.
I note that you have control over the sample size (here n=10) for this t-test, so you could potentially improve these outcomes with improved numbers of simulations (which should have been done in the first place; hint). I also note that its ok to get exactly the same performance as a competitor method if you're doing the analysis much faster (see above point). Finally, its also ok to get the same performance as a competitor for exactly the same computational overheads if there is conceptually some other advantage to be had, but this case is the authors to make.
Thank you for your further questions. We will now proceed with a more in-depth analysis of Tables 2 & 3, as well as discuss the advantages of LinUProp compared to its competitors.
- From the perspective of runtime, as we replied in the previous question:
- when using BP as the inference method (Table 2), the competing methods LC/Entropy are significantly slower than LinUProp
- when using NETCONF as the inference method (Table 3), the runtime of LC/Entropy/CS/CS+LC is similar to that of LinUProp.
- Further analysis of Tables 2 & 3
- In Table 3, LinUProp's winners (BB/LC+BB) significantly outperform the winners of NETCONF-based UQ methods (CS/LC+CS) in all cases (pairwise t-test at a 5% significance level). Since NETCONF is never the winner among non-LinUProp methods, no significance is marked for it.
- Beyond statistical significance, evaluating the stability of methods is also crucial. LinUProp achieves the highest mean accuracy in most cases compared to its competitors (28/32). Although not always significant, none of the competitors can maintain performance as good as LinUProp across all 4 datasets, 2 inference methods, and 4 labeling accuracies. Specifically, in Tables 2 & 3, while LinUProp isn't significantly better than LC/Entropy in some cases, it never exhibits high standard deviations (maximum standard deviation less than 7%) or very poor accuracy in any situation. In contrast, LC only achieves 55% accuracy on Polblogs in Table 3 across all four labeling accuracies, while LinUProp consistently exceeds 80%. The Entropy method shows high standard deviations on the Polblogs dataset in both tables (some exceeding 14%), whereas LinUProp generally has lower standard deviations compared to the non-LinUProp winners in the majority of all cases (marked with underlines).
- Unique advantages of LinUProp compared to competitors
- LinUProp offers **interpretability** (Eq. (12)). For the posterior uncertainty of a node calculated by LinUProp, we can trace the contribution of any other node to that node's uncertainty, even if it's several hops away. This feature is not available in competing methods but is crucial for users to trust UQ results.
- LinUProp has a **solid theoretical foundation** (Section 4.2). We decompose the expected prediction error of the graphical model and prove that the uncertainty computed by LinUProp is the generalized variance component of the decomposition.
---
Rebuttal Comment 3.1:
Comment: Thank you. This is now a much more substantial, clear, and fair discussion of the relative performance and merits of LinUProp versus the other methods. As a reader, I feel I now have a better understanding of the contributions and performance of this method.
Overall I'd like to thank the Authors for being responsive and providing the additional results and discussion that this paper needed. I feel it is now in a good place, and I will increase my score.
---
Rebuttal 4:
Comment: Thank you for your valuable feedback, which has greatly improved our work. We're pleased that the expanded discussion and examples provide clarity, and we're glad the additional comparisons enhance the grounding of our results. Your insights on the performance and merits of LinUProp have helped create a clearer understanding. We appreciate your responsiveness and support in refining our paper. | Summary: This paper considers the problem of calculating uncertainty in the infererence results on probabilistic graphical models. Their method is based on a previously published linearization of the belief propagation method, to provide scalability, interpretability, and unbiasedness. The benefits of the new algorithm is demonstrated through experiments comparing with existing methods including Monte Carlo sampling, and uncertainty-based active learning on graphs.
Strengths: Originality: the studied problem is unique in that a linearly scalable, interpretable, provably convergent uncertainty quantification method on graphs is still missing. The proposed method is novel, with a new definition of the uncertainty quantity based on an interval width that is decomposed into bias and variance. Such a definition is not seen in prior work.
Quality: Technically, the formulas and algorithm are developed rigorously, and I don’t find errors therein. Their experimental protocols are designed carefully to evaluate their methods and prove the claimed advantages.
Clarity: the process of conducting the experiments, including datasets, graph constructions, and baseline setup, is clearly described, and reproducing the results is feasible. Figure 2 helps in understanding the dimensions and operations of the linear operators on graphs.
Significance: their problem definition targets gaps in uncertainty quantification on graphical models. In particular, previous methods, such as Monte Carlo and other uncertainty calculation methods on graphs, are either not scalable, biased, without convergence guarantees, or non-interpretable. In this respect, the work addresses the need for a scalable, interpretable, provably convergent, and unbiased uncertainty quantification algorithm for graphical models. In terms of theoretical significance, the decomposition of the computed interval width further makes sense of their proposed definition of uncertainty. Unlike previous work that uses the variance of a distribution as a notion of uncertainty, the computed uncertainty is in fact a bias plus a variance term, thus shedding light on the seemingly unmotivated definition of interval width.
Weaknesses: -The proposed method is based on a previously developed linearization method, making their innovation limited.
-In the section “Correctness of quantified uncertainty”, the correctness is only validated on a small, simple graph, and no experiments are conducted on larger-scale graphs. Maybe the authors can explain why it is designed this way?
Technical Quality: 3
Clarity: 4
Questions for Authors: -In Figure 6, the legend shows 7 methods, while in some of the subfigures, there are only 5-6 curves. Can you explain why?
-In the same figure, LC seems to have performance between that of LC+BB and LC+CS. Is there any explanation about this observation?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Please see the weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed and constructive reviews.
>The proposed method is based on a previously developed linearization method, making their innovation limited.
Indeed, we did use a conclusion from LinBP (Centered BP) in our derivation. However, LinUProp fundamentally differs from LinBP in its goals, and we have several unique contributions:
- **Different Goals**: The goal of LinBP is to infer the posterior beliefs of nodes, providing a point estimate of these beliefs. In contrast, LinUProp aims to quantify the uncertainty in the posterior beliefs, serving as a UQ method that offers the uncertainty in the form of interval widths.
- Since LinBP is not a UQ method, our contributions in the field of UQ are distinct from those of LinBP:
- LinUProp addresses the issue of underestimating posterior uncertainty found in related works (e.g., NETCONF/SocNL).
- The posterior uncertainty of nodes computed by LinUProp is interpretable, meaning we can understand the contribution of each other node to a specific node's posterior uncertainty by applying a Neumann series expansion to LinUProp.
- We decompose the expected prediction error of the graphical model and prove that the uncertainty computed by LinUProp is the generalized variance component of the decomposition.
>In the section of “Correctness of quantified uncertainty”, the correctness is only validated through a small simple graph, and no experiments are conducted on larger-scale graphs. Maybe the authors can explain why it is so designed?
In the section “Correctness of Quantified Uncertainty,” we explained the reason for directly verifying the correctness of LinUProp on small graphs through MC simulations:
"MC simulations are adopted as the ground-truth due to their ability to provide accurate approximations through a sufficient amount of sampling [30], which are feasible for small-scale graphs."
For large-scale graphs, each sampling of the posterior belief for all nodes takes significantly longer than for small graphs, and more samples are needed to ensure accurate approximations. For example, on the Pubmed dataset, under the same experimental conditions as on small graphs, a single sampling of the posterior belief takes about 1 minute. Therefore, even maintaining the same number of samples as in the small-graph experiment (100,000 times) would take approximately 70 days. Consequently, for large-scale graphs, we, like related works, validate the effectiveness of LinUProp in downstream tasks such as active learning (which we use) [1], OOD detection [2], and robustness against node feature or edge shift [3].
[1] Kang, Jian, et al. "JuryGCN: quantifying jackknife uncertainty on graph convolutional networks." SIGKDD, 2022.
[2] Zhao, Xujiang, et al. "Uncertainty aware semi-supervised learning on graph data." NeurIPS, 2020.
[3] Stadler, Maximilian, et al. "Graph posterior network: Bayesian predictive uncertainty for node classification." NeurIPS, 2021.
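As a quick sanity check of the 70-day estimate above, a sketch using only the figures quoted in the rebuttal:

```python
# Sanity check of the estimate above (figures taken from the rebuttal):
# 100,000 posterior samples at roughly 1 minute per sample.
samples = 100_000
minutes_per_sample = 1
days = samples * minutes_per_sample / (60 * 24)
print(round(days, 1))  # 69.4 days, i.e. roughly the quoted 70 days
```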
>In Figure 6, the legend shows 7 methods, while in some of the subfigures, there are only 5-6 curves. Can you explain why?
The curves corresponding to the BB and LC+BB methods are almost overlapping, which initially makes them appear as a single curve. However, upon closer inspection of the markers (such as $\triangle$ and $\triangledown$), it becomes evident that there are actually two distinct curves. The CS and LC+CS methods ($\triangleleft$ and $\triangleright$) exhibit a similar situation, ultimately making the 7 curves appear as only 5.
>In the same figure, LC seems to have performance between that of LC+BB and LC+CS. Is there any explanation about this observation?
The analysis of Figure 6 in the paper (lines 292-296) has already provided an explanation for this observation. This phenomenon nicely validates the potential issues with NETCONF that we mentioned in Figure 1: NETCONF (CS/LC+CS) prioritizes labeling low-degree nodes due to their inherent assumption that neighbors necessarily reduce uncertainty. As a result, nodes with many neighbors are often mistakenly viewed as having low uncertainty and are left unlabeled, which further leads to high uncertainty in the majority of nodes, ultimately causing LC+CS to perform worse than LC. In contrast, LinUProp (BB/LC+BB), which appropriately incorporates the uncertainty of neighbors, does not have this issue of underestimating uncertainty and thus results in LC+BB performing significantly better than LC. | Summary: A method for quantifying the uncertainty in the graphical model is proposed. The authors claimed that the method is superior over prior methods including NETCONF and Monte Carlo in the aspects of scalability, interpretability, and unbiasedness. Active learning that uses node uncertainty estimation for unlabeled node selection is conducted to show the benefits of the computed uncertainty.
Strengths: - I find the decomposition of the prediction error into bias and generalized variance novel, and the assignment of the interval width to the variance term is creative.
- The submission is well-prepared and contains the necessary components to make it a complete piece of work.
- The presentation of the preliminaries clearly shows the background, facilitating the understanding of the more complicated parts in Section 3. Overall, the paper is well-organized with clear logic.
- The proofs in Section 4 provide significant insight into the belief propagation algorithm and also the uncertainty quantification. Experiments in various settings prove the usefulness of the studied problem and the LinUProp method.
Weaknesses: - I have doubts about the interpretability of LinUProp, as it involves high-order terms such as $T^2$, $T^3$, etc. that are not easy to understand.
- Eq. (8) seems to be a simple extension of the original LinBP method.
- It is not clear what the $\mathbb{x}$, $\mathbb{y}$, and $\mathbb{P}$ terms in the proof in Eq. (9) are. What’s their relationship to the LinUProp method?
Technical Quality: 3
Clarity: 3
Questions for Authors: - The uncertainty is defined using interval width. Does the location of the interval matter and why?
- Due to the proof of Eqs. (14) and (15), is it possible to compute the generalized variance component directly without LinUProp?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The clarity of Figure 2 can be improved: what are H_12, H_23, e_1, etc.? The authors should point to the definitions in the main text or label these symbols in the figure.
GNN models are more popular, and their uncertainty quantification has been studied. Though GNNs and probabilistic graphical models are not directly comparable, I expect the authors to provide some discussion of GNNs and make potential connections between methods for these two sorts of models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed and constructive reviews.
>I have doubts about the interpretability of LinUProp, as it involves high-order terms such as $T^2$, $T^3$, etc. that are not easy to understand.
When we say that LinUProp is interpretable, we mean that for the posterior uncertainty of each node calculated by LinUProp, we can determine the contribution of each other node to the posterior uncertainty of that node. The matrices $T^2$,$T^3$, etc., are intermediate steps obtained from applying a Neumann series expansion to the closed-form solution of LinUProp. We do not need to understand their specific meanings; we only need to sum them to obtain the contribution of each other node to the current node's uncertainty. This allows us to achieve interpretability for LinUProp.
Moreover, since the necessary and sufficient condition for LinUProp convergence is that the spectral radius of $T$ is less than 1, when the algorithm converges, the higher-order terms become very small. Therefore, in practice, it's often sufficient to compute only a few lower-order terms to understand the contribution of each other node to the uncertainty of a given node.
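To illustrate this point, here is a minimal numerical sketch (using a random matrix, not the paper's operators) of how a truncated Neumann series $\sum_k T^k$ approximates the closed-form inverse when the spectral radius is below 1:

```python
import numpy as np

# Hedged illustration (random matrix, NOT the paper's data): when the
# spectral radius of T is below 1, a truncated Neumann series
# sum_{k=0}^{K} T^k approximates the closed-form inverse (I - T)^{-1},
# so a few low-order terms already capture most of each node's
# contribution to another node's uncertainty.
rng = np.random.default_rng(0)
n = 5
T = rng.random((n, n))
T *= 0.5 / np.abs(np.linalg.eigvals(T)).max()  # rescale to spectral radius 0.5

exact = np.linalg.inv(np.eye(n) - T)           # closed-form solution
approx = sum(np.linalg.matrix_power(T, k) for k in range(20))  # K = 19 terms

# Entry (i, j) of `exact` aggregates node j's total contribution to node i;
# the truncated series recovers it up to an error of order 0.5**20.
print(np.max(np.abs(exact - approx)))
```

With a smaller spectral radius, even fewer terms would suffice, which is the rebuttal's point about computing only a few lower-order terms in practice.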
>Eq. (8) seems to be a simple extension of the original LinBP method.
Eq. (8) is the closed-form solution of LinUProp:
$$\text{vec}(\mathbb{B})=(\mathbf{I}-\mathbf{\Psi}_{1}^{'}-\text{Diag}(\mathbf{\Psi_2}^{'}\mathbf{Q}))^{-1}\text{vec}(\mathbb{E}).$$
The closed-form solution of LinBP is
$$\text{vec}(\hat{\mathbf{B}})=(\mathbf{I}-\hat{\mathbf{H}}\otimes\mathbf{A}+\hat{\mathbf{H}}^2\otimes\mathbf{D})^{-1}\text{vec}(\hat{\mathbf{E}}).$$
At first glance, Eq. (8) and LinBP appear somewhat similar, as both conform to the form $\mathbf{y}=(\mathbf{I-P})^{-1}\mathbf{x}$. However, simply replacing prior and posterior beliefs in LinBP with interval widths ($\text{vec}(\mathbb{E})$ and $\text{vec}(\mathbb{B})$) does not yield LinUProp. This is because LinUProp's goal is to calculate the uncertainty of each node's posterior belief, derived from the upper and lower bounds of messages and beliefs through a series of derivations, ultimately leading to Eq. (8). This cannot be achieved by merely extending LinBP; for a detailed derivation, please refer to Appendix A.1.
>It is not clear what are the $\mathbb{x},\mathbb{y}$, and $\mathbb{P}$ terms in the proof in Eq. (9). What’s their relationship to the LinUProp method?
After Eq. (9), on line 157, $\mathbf{y}=(\mathbf{I-P})^{-1}\mathbf{x}$ simply represents the general form of a linear equation system that can be solved using the Jacobi method. We merely want to convey that the closed-form solution of LinUProp conforms to this general form. Therefore, the convergence conditions are consistent with the Jacobi method. Specifically, $\mathbf{y}$ corresponds to $\text{vec}(\mathbb{B})$, $\mathbf{x}$ to $\text{vec}(\mathbb{E})$, $\mathbf{P}$ to $\mathbf{\Psi}_{1}^{'}+\text{Diag}(\mathbf{\Psi_2}^{'}\mathbf{Q})$.
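A minimal sketch (with a hypothetical random $\mathbf{P}$, not the paper's operators) of the Jacobi-style fixed-point iteration for $\mathbf{y}=(\mathbf{I-P})^{-1}\mathbf{x}$, which converges precisely when the spectral radius of $\mathbf{P}$ is below 1:

```python
import numpy as np

# Hedged sketch (random P, NOT the paper's operators): the Jacobi-style
# fixed-point iteration y <- P @ y + x converges to y = (I - P)^{-1} x
# exactly when the spectral radius of P is below 1 -- the same condition
# cited for LinUProp's convergence.
rng = np.random.default_rng(1)
n = 4
P = rng.random((n, n))
P *= 0.4 / np.abs(np.linalg.eigvals(P)).max()  # spectral radius 0.4 < 1
x = rng.random(n)

y = np.zeros(n)
for _ in range(100):        # error shrinks like 0.4**k, so 100 steps suffice
    y = P @ y + x

closed_form = np.linalg.solve(np.eye(n) - P, x)
print(np.allclose(y, closed_form))  # True
```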
>The uncertainty is defined using interval width. Does the location of the interval matter and why?
This can be concluded from the derivation of LinUProp. At the beginning of the derivation (Eqs. (16-19)), the upper and lower bounds of the messages and beliefs are involved. However, by Eq. (20), after subtracting the lower bound from the upper bound of the messages, only the interval width remains in the equation, and the specific location of the interval is no longer considered. Therefore, the specific location of the interval is not important for LinUProp.
>Due to the proof of Eqs. (14) and (15), is it possible to compute the generalized variance component directly without LinUProp?
Through Eqs. (14) and (15), we derived **the variance component** and further proved that the variance component is a special case of LinUProp. This ultimately led to the conclusion that LinUProp is the generalized variance component. It's important to note that "the generalized variance component" answers what the uncertainty computed by LinUProp represents; we cannot directly calculate "the generalized variance component" itself.
>The clarity of Figure 2 can be improved: what is H_12, H_23, e_1, etc.? The authors should point to the definitions in the main texts or to label these symbols in the figure.
We have stated in the caption of Figure 2 that the inputs to LinUProp include (1) Uncertainty in prior beliefs of each node represented as interval widths (2) Edge potentials. However, we did not explicitly specify in the caption how these symbols correspond to (1) and (2).
In fact, $\mathbb{e}_1,\mathbb{e}_2,\mathbb{e}_3$ correspond to (1), and $\mathbf{H_{12}},\mathbf{H_{23}}$ correspond to (2). Although these symbols are defined in lines 123-128, explicitly mentioning this correspondence in the figure caption would make it clearer. We will explicitly include this correspondence in the caption of Figure 2 in the camera-ready version.
>GNN models are more popular, and their uncertainty quantification has been studied. Though GNNs and probabilistic graphical models are not directly comparable, I expect the authors to provide some discussion of GNNs and make potential connections between methods for these two sorts of models.
Some studies have attempted to combine GNNs with probabilistic graphical models to leverage the strengths of both approaches. For instance, this combination can provide interpretability to GNNs [1] or achieve performance that surpasses using GNNs alone in certain tasks [2]. Solely relying on UQ methods for GNNs may be insufficient to compute uncertainties for these hybrid approaches. Therefore, integrating LinUProp with existing UQ methods for GNNs could potentially be more promising.
[1] Vu, Minh, and My T. Thai. "Pgm-explainer: Probabilistic graphical model explanations for graph neural networks." NeurIPS, 2020.
[2] Qu, Meng, Yoshua Bengio, and Jian Tang. "Gmnn: Graph markov neural networks." ICML, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. I will keep my score after reviewing other referees' comments and the manuscript.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and for raising important points to improve our paper. We appreciate your insights. | Rebuttal 1:
Rebuttal: Tables 2 and 3 have been updated with annotations for the standard deviation and t-test results.
Pdf: /pdf/77fe82473d1ed7d2d458119547c7d4b3d530418c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Online Bayesian Persuasion Without a Clue | Accept (spotlight) | Summary: The paper studies online Bayesian persuasion with "no clue", i.e., the sender knows nothing about the prior distribution or the receiver's utility function a priori (it is however assumed that there exists a prior distribution and states are drawn iid from it). The main results are (1) in the online model where the sender tries to learn a good signaling scheme on the fly while not losing too much utility in the process, there is an algorithm that achieves square-root regret, (2) lower bounds showing that the dependency of this regret bound on various parameters are in a sense unavoidable, and (3) in the PAC model, an adapted algorithm that achieves a nontrivial sample complexity bound.
Strengths: The problem appears meaningful and technically interesting. The idea of slices is natural, clean and powerful. The overall plan is clearly plausible, and yet not obviously viable. I'm happy to see that the authors were able to make it work, which seems to have involved nontrivial effort. The bound is reasonably tight.
Weaknesses: Ideally an instance-optimal bound would be more exciting, but perhaps that's too much to ask for. I'd also like to see more discussion of the PAC model and the optimality of the bound in that model (if applicable).
Technical Quality: 4
Clarity: 3
Questions for Authors: (also including detailed comments)
Line 22, "such as, e.g., ...": a bit repetitive?
Line 92: superscript missing in the definition of $u_\theta^s$?
Footnote 1: might as well quickly define int while you are defining $\Delta$
Line 110: missing ")"
Sec 3: the title was a bit confusing to me, but I think the section as a whole is very helpful for a non-expert reader to develop intuition. It's also very clearly written.
Line 218: "sufficiency"
Overview part of sec 4: my first instinct is that explore-then-commit strategies normally give regret bounds like $O(T^{2/3})$. This will probably become clear soon, but it might help to quickly discuss why you were able to get square-root regret here. (My guess is somehow you managed to stay in the "realizable" regime where learning up to eps requires only 1/eps rounds.)
Sec 6: maybe quickly introduce the setup first (in particular, what is a "sample" here)? Relatedly, it's also interesting that here you get $1/\gamma^2$ instead of $1/\gamma$.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the insightful feedback and for pointing out some typos. We will incorporate these suggestions and correct the typos in the final version of the paper.
> Overview part of sec 4: my first instinct is that explore-then-commit strategies normally give regret bounds like $O(T^{2/3})$. This will probably become clear soon, but it might help to quickly discuss why you were able to get square-root regret here. (My guess is somehow you managed to stay in the "realizable" regime where learning up to eps requires only 1/eps rounds.)
The Reviewer is right. The main reason we do not incur an $O(T^{2/3})$ regret is that we can build a space of signaling scheme slices in $O(1 / \epsilon)$ rounds, such that, for each element, it is possible to associate a receiver's best response in $O(1 / \epsilon)$ rounds.
Technically, this is made possible by employing the multiplicative Chernoff bound, which allows us, with $O(1 / \epsilon)$ rounds, to distinguish which states $\theta \in \Theta$ have a probability of being observed smaller than $\epsilon$, i.e., $\mu_\theta \le \epsilon$.
In this way, the "exploration phase" of our explore-then-commit approach requires $O(1 / \epsilon)$ rounds, allowing us to achieve a $\sqrt{T}$ regret by suitably choosing $\epsilon = O(1/\sqrt{T})$, instead of the $O(T^{2/3})$ regret that is typical of explore-then-commit approaches.
Finally, we observe that in our "commit" phase, the sender does not commit to a fixed signaling scheme. Instead, they select a different signaling scheme $\phi_t$ computed by means of Algorithm 3, which receives as input an estimate of the prior distribution $\widehat \mu_t$ that is updated at each round.
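The trade-off can be sketched as a back-of-the-envelope calculation (the constants $c_1, c_2$ are illustrative, not from the paper):

```latex
% Sketch: regret of an explore-then-commit scheme whose exploration
% phase costs O(1/\epsilon) rounds and whose commit phase loses
% O(\epsilon) utility per round.
R(T) \;\lesssim\; \underbrace{\frac{c_1}{\epsilon}}_{\text{exploration}}
  \;+\; \underbrace{c_2\,\epsilon\,T}_{\text{commitment}},
\qquad \epsilon = \Theta\!\left(\frac{1}{\sqrt{T}}\right)
\;\Longrightarrow\; R(T) = O\!\left(\sqrt{T}\right).
```

With the more common $O(1/\epsilon^2)$-round exploration phase, the same minimization would instead yield $\epsilon = \Theta(T^{-1/3})$ and $R(T) = O(T^{2/3})$.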
> Sec 6: maybe quickly introduce the setup first (in particular, what is a "sample" here)?
We thank the Reviewer for giving us the opportunity to better clarify the concept of "sample" within our framework.
We use the term "sample" to refer to the feedback received by the sender when they commit to a signaling scheme.
Intuitively, in a PAC-learning problem we do not have a finite time horizon, nor are we interested in the regret.
Instead, we want to learn a $\gamma$-optimal solution by using the minimum possible number of samples (or equivalently, rounds), and we are not concerned about the regret accumulated while learning it.
---
Rebuttal Comment 1.1:
Comment: Thank you for your helpful response. I don't have further questions. | Summary: This paper studies repeated Bayesian persuasion where the sender does not know the prior distribution of the state of the world and the receiver's utility (while the receiver knows the prior and their utility and myopically best responds in each period). The authors design an online learning algorithm for the sender that achieves sublinear regret $O\big(\binom{d+n}{d} n^{3/2} d^3 \sqrt{BT}\big)$. They also prove lower bounds of $2^{\Omega(d)}$ and $2^{\Omega(n)}$. The proposed algorithm, a sophisticated explore-then-commit algorithm, works by searching the space of signaling schemes under a non-standard representation.
Strengths: (1) [Significance] This work is a significant improvement over previous works on online learning in Bayesian persuasion where the sender learns either the prior or the receiver's utility. Learning the prior and utility simultaneously seems to be challenging. The authors are able to solve this problem, which is a significant contribution.
(2) [Originality] A key technique in this work is performing high-dimensional binary search in the space of signaling schemes under a non-standard representation. In previous works, signaling schemes were usually represented by distributions over posterior distributions, and the search was performed on the space of posterior distributions. Such a technique does not work when the prior is unknown. This work instead represents signaling schemes by the vector $(\sigma_\theta(s))_{\theta \in \Theta}$, the probabilities of sending a signal $s$ under different states $\theta$. This new representation circumvents the unknown prior issue. So, I think this new representation is an interesting trick and a technical novelty.
(3) [Quality] Non-trivial lower bound results are provided, complementing the upper bounds results.
Weaknesses: (1) [Quality] The proposed algorithm seems to have an exponential running time in the worst case. Specifically, the number of vertices $\mathcal V$ of the polytopes seems to be exponential in $d$ or $n$, which leads to an exponential running time. The authors didn't discuss the computational complexity of their algorithm, nor provide a computational-hardness result.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors respond to weakness (1)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: **Suggestions:**
(1) Please define the B in algorithm 1 more clearly.
(2) In Appendix A (Additional Related Works), in addition to saying what the related works did, please briefly compare those works with your work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the insightful feedback. We will also incorporate the Reviewer's two suggestions in the final version of the paper.
> The proposed algorithm seems to have an exponential running time in the worst case. Specifically, the number of vertices $\mathcal{V}$ of the polytopes seem to be exponential in $d$ or $n$, which leads to an exponential running time. The authors didn't discuss the computational complexity of their algorithm, nor provided a computational-hardness result.
We thank the Reviewer for giving us the opportunity to better clarify this aspect of our paper.
The per-round running time of our algorithm is polynomial in the size of the input when either the number of receiver’s actions $n$ or that of states of nature $d$ is fixed.
We agree with the Reviewer on the fact that, if both $n$ and $d$ are not fixed, then the per-round running time may no longer be polynomial in the input size in the worst case.
We observe that Algorithm 6 can find the H-representation of the polytopes $\mathcal{X}\_\epsilon(a)$ with polynomial per-round running time.
In particular, to enumerate the vertices of the upper bounds, it can compute and query a different vertex at each round, without ever storing them all.
In the current version, given the H-representation of the polytopes, our algorithm also computes the set of all the vertices $\mathcal{V}$, whose cardinality may be exponential in either $n$ or $d$, and eventually employs this set to instantiate an LP.
However, it is possible to avoid computing these vertices and solve an equivalent linear program whose size is polynomial in $n$ and $d$.
This can be done by employing the H-representation of the polytopes $\mathcal{X}\_\epsilon(a)$ computed by Algorithm 6, and exploiting a slightly different LP.
This approach does not require computing $\mathcal{V}$, thus achieving a polynomial per-round running time in every phase of the algorithm.
The theoretical analysis with the modified LP is almost straightforward given the results presented in the paper, as it only requires the introduction of some additional technical lemmas about polytopes.
We will include this algorithmic approach in the final version of the paper and we are happy to provide additional technical details, if the Reviewer wants to.
> (1) Please define the B in algorithm 1 more clearly. (2) In Appendix A Additional Related Works, in addition to saying what the related works did, please briefly compare those works with your work.
We thank the Reviewer for the suggestions; we will incorporate them into the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Happy with authors' response
Comment: I am happy with the authors' response and raise rating to 7. Indeed, adding discussion on the computational complexity might further strengthen the paper. | Summary: The paper studies Bayesian persuasion in a learning setting with minimum knowledge about the receiver: neither the receiver's prior nor their utility function is known. In the model, the sender can commit to different signaling strategies and acquire the receiver's optimal response to each signal they send. The paper presents a learning algorithm that achieves regret sublinear in the number of rounds but exponential in the number of states. The authors further show that the exponential dependency on the number of states is inevitable by proving a matching sample complexity.
Strengths: The paper follows a line of work in the literature on Stackelberg games that study how to learn to commit optimally by querying the follower's optimal responses. I find it natural and reasonable to ask the same question in Bayesian persuasion. While the overall approach works similarly by building the follower's (receiver's) best response regions, the paper shows that there are some unique features of the persuasion setting, which require additional techniques.
The estimation of the prior, in particular, can only be done approximately. The paper nicely handles this aspect in their algorithm. The concept of slice is also novel and interesting (though the naming is somewhat less intuitive).
The results look very complete, with matching lower and upper regret bounds. The paper is also well-written and overall clear.
Weaknesses: - The part below Definition 1 until the end of that page is a little dense and could probably be improved. The notation in this part is also a bit hard to follow. However, this is overall minor.
- Line 296: It might be better to be a bit more specific about the distinguished features here.
- Is there any justification how the receiver gets to know the sender's strategy $\phi_t$ in each round? How would the results change if the receiver only observes the sender's signal in each round?
Typos:
- Line 51: of much (how much?)
- Line 314: clean event
Technical Quality: 3
Clarity: 3
Questions for Authors: - In algorithm 2, the feedback a^t and u_t^s seem not useful? If so, better to remove this line?
- What would happen if the receiver is not truthful? And since the receiver knows their prior, does it make sense to design a mechanism to directly elicit this prior knowledge?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Nothing is mentioned, but no concern here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the insightful feedback and for pointing out some typos. Specifically, we will do our best to improve the part below Definition 1 and the comparison between previous models at Line 296 in the final version of the paper.
> Is there any justification how the receiver gets to know the sender's strategy $\phi_t$ in each round? How would the results change if the receiver only observes the sender's signal in each round?
The assumption that the receiver gets to know the sender's signaling scheme (a.k.a. the commitment assumption) is inherent in every Bayesian persuasion model. Moreover, the same assumption is standard in online Bayesian persuasion settings, where the receiver observes the sender's signaling scheme $\phi_t$ at each round $t \in [T]$. See, e.g., Castiglioni et al. (2020b) and Zu et al. (2021).
Notice that, by dropping the commitment assumption, one falls into a completely different model, which is commonly referred to as "cheap talk" in the economic literature. Indeed, if the receiver only observes a signal $s \sim \phi_{t, \theta_t}$ sampled from the signaling scheme $\phi_t$ at round $t$, without knowing $\phi_t$, it is not immediate how to define a best response for the receiver. We believe that addressing online learning problems in such a different setting is a very interesting research direction that is worth pursuing in the future.
> In algorithm 2, the feedback $a^t$ and $u_t^s$ seem not useful? If so, better to remove this line?
We agree with the Reviewer, we will omit it from Algorithm 2.
> What would happen if the receiver is not truthful? And since the receiver knows their prior, does it make sense to design a mechanism to directly elicit this prior knowledge?
Addressing settings with a non-truthful receiver is an interesting research direction that we intend to explore in the future. This poses considerable challenges that are beyond the scope of this paper.
Designing a mechanism in which the sender elicits information from the receiver's knowledge of the prior is certainly an interesting idea. However, since the sender has never access to the receiver's payoffs, it is not clear how the sender could effectively benefit from the receiver's knowledge of the prior. Indeed, receiver's best responses depend on both the unknown prior and receiver's unknown payoffs, which makes it challenging to extract information about the prior only.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses! | Summary: The paper studies online Bayesian persuasion problems in which an informed sender
repeatedly faces a receiver with the goal of influencing their behavior through the provision of payoff-relevant information. The paper considers a setting where the sender does not know anything about the prior and the receiver’s utility function. At each round, the sender commits to a signaling scheme, and, then, they observe a state realization and send a signal to the receiver based on that. After each round, the sender gets partial feedback, namely, they only observe the best-response action played by the receiver in that round.
The main results of this paper are: (1) providing a learning algorithm that achieves $\tilde{O}(\sqrt{T})$ regret, and this regret bound has exponential dependence on the number of states $d$ and the number of receiver’s actions $n$; (2) a set of lower bounds showing that such $\sqrt{T}$-dependency is optimal, and such exponential dependency on $d, n$ is also optimal; (3) extending the no-regret learning algorithm to establish the sample complexity of the Bayesian persuasion PAC-learning problem.
Strengths: I truly enjoyed reading this work. I think it significantly contributes to the recent line of research on using regret minimization to study persuasion under uncertainty about the underlying environment. The results developed in this work are strong and interesting. The presentation is crisp, and the paper provides intuition for the algorithmic challenges and the steps of its algorithm design. Overall, I think this work makes a strong addition to NeurIPS.
Weaknesses: I don’t have particular concerns about the paper. Maybe one minor point is that the authors may want to include some discussions about the computational efficiency of the designed algorithm.
Technical Quality: 4
Clarity: 4
Questions for Authors: It may be nice to present the $\tilde{O}(n^d \sqrt{T})$ regret and the $\tilde{O}(d^n \sqrt{T})$ regret as two corollaries of Theorem 1.
It is indeed a bit interesting to me that the unknowns of both the prior and the receiver's utilities make the learning problem significantly challenging: one has to incur $\sqrt{T}$ regret. Do you have intuitions on whether this $\sqrt{T}$-regret is mainly due to the unknown receiver's utilities or the unknown prior?
-- It seems that the $\sqrt{T}$ lower bound in Theorem 3 requires 4 receiver actions. Does the $\sqrt{T}$ lower bound also hold if the receiver only has binary actions? Do the authors feel the regret may be improved (e.g., $\Theta(\log T)$ or even $\Theta(\log\log T)$) when considering only BINARY receiver actions? As in the paper "EC'21—Online Bayesian Recommendation with No Regret," the authors show that when the sender only does not know the receiver's utilities, a regret of $\Theta(\log\log T)$ is attainable by some variant of binary search when focusing on binary actions. (So I kind of feel your $\sqrt{T}$-regret may be mainly due to the unknown prior of the state realizations.)
I would imagine that most of the results still hold even if the receiver has a different unknown prior from the sender (but the states are still realized from the sender's unknown prior), right?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the insightful comments. We will adopt them in the final version of the paper and we will better discuss the points suggested by the Reviewer.
> It is indeed a bit interesting to me that the unknowns of both the prior and the receiver's utilities make the learning problem significantly challenging: one has to incur $\sqrt{T}$ regret. Do you have intuitions on whether this $\sqrt{T}$-regret is mainly due to the unknown receiver's utilities or the unknown prior?
The $\Omega(\sqrt{T})$ regret lower bound is a consequence of the sender's lack of knowledge of **both** the receiver's utilities and the prior distribution.
Indeed, in our setting, it is possible to construct two instances with similar prior distributions and define the receiver's utilities so that the sender gains no information to distinguish between the two instances by committing to any signaling scheme.
This is not possible if the utilities are known.
Consequently, to distinguish between the two instances, the sender can only leverage the information contained in the states of nature sampled at each round, and this results in the regret being at least $\Omega(\sqrt{T})$.
> It seems that the lower bound $\sqrt{T}$ in Theorem 3 requires 4 receiver’s actions. Does the $\sqrt{T}$ lower bound also hold if the receiver only has binary actions? Do the authors feel the regret may be improved (e.g., $\Theta(\log T)$ or even $\Theta(\log \log T)$) if one considers only binary receiver actions? As in the paper "EC’21—Online Bayesian Recommendation with No Regret," the authors show that when the sender only does not know the receiver’s utilities, a regret of $\Theta(\log \log T)$ by some variant of binary search is attainable when focusing on binary actions. (So I kind of feel your $\sqrt{T}$-regret may be mainly due to the unknown prior of the state realizations).
Developing a regret lower bound for instances with binary actions is an interesting research direction that we intend to explore in the future. We leave as an open problem whether it is possible to construct two instances similar to those presented in our lower bound, but with binary receiver actions, and still obtain an $\Omega(\sqrt{T})$ regret lower bound, or whether a better upper bound can be achieved.
> I would imagine that most of the results still hold even if the receiver has a different unknown prior from the sender (but the states are still realized from the sender's unknown prior), right?
The Reviewer is right. Our algorithm can also be employed in settings where the receiver has a different prior but the states of nature are sampled from the sender's unknown prior. We will better outline this aspect in the final version of the paper. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Visual Data Diagnosis and Debiasing with Concept Graphs | Accept (poster) | Summary: The paper introduces a method for addressing the issue of inherent bias in data during the training process. This bias can lead to unreliable predictions from the model. The proposed method, called ConBias, is a new approach for identifying and mitigating Concept co-occurrence Biases in visual datasets. The method represents visual datasets as knowledge graphs of concepts, allowing for detailed analysis of misleading concept co-occurrences that reveal imbalances across the dataset. Additionally, the authors demonstrate how a new clique-based concept balancing strategy can address these imbalances, resulting in improved performance on subsequent tasks. Extensive experiments show that applying data augmentation based on the proposed method leads to significant improvements in generalization performance across various datasets when compared to state-of-the-art methods.
Strengths: The paper is well-written and explains the method thoroughly. The authors' experiments on various common datasets demonstrate the method's effectiveness, and the comparisons to prior art show that it achieves state-of-the-art performance. The paper includes a thorough analysis of the method's components, justifying its contributions.
Weaknesses: In the paper, I missed an ablation study of the authors' knowledge graph design: the impact of using a different number of graph layers, different knowledge graph architectures, different initializations, etc.
Technical Quality: 3
Clarity: 3
Questions for Authors: https://arxiv.org/abs/2310.04562 proposes "ULTRA", a foundation model for knowledge graph pre-training. Could using this method benefit the proposed graph structure initialization?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have properly addressed the proposed method limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments.
**Clarification on the ablation studies:**
Since the concept graph phase in ConBias is not a learning mechanism, we circumvent the need for initialization or training strategies. This is beneficial, as we achieve both controllability and interpretability in the bias diagnosis process. The imbalanced cliques provide an intuitive mechanism to diagnose the spurious correlations in the data. With regard to ablation studies, since the augmented data generation phase is a learning mechanism (we use Stable Diffusion with a CLIP-based filter), we present ablation results in Table 3. We show that using a different generative model such as IP2P still results in significant improvements over the baselines. This result demonstrates that it is the bias diagnosis and clique-balancing strategy of ConBias that constitutes the true novelty.
**Graph structure initialization:**
We thank the reviewer for referring to the ULTRA paper. Broadly, such an approach excites us immensely, where a knowledge graph encapsulates deep information about the data. For instance, in lines 317 and 324, we mention the potential of future work in developing more novel graph structures that can capture diverse biases beyond object co-occurrences. Foundational knowledge graphs with rich relational information may prove to be the next logical step in dataset diagnosis and debiasing. We will add this discussion to the final version, and we thank the reviewer for raising this idea.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
I have reviewed your responses to my and other reviewers' concerns, and I am satisfied with them. Therefore, I am keeping my original score. | Summary: The authors present ConBias, which is a novel approach for diagnosing and de-biasing co-occurrence bias in datasets. Unlike previous works, ConBias addresses both diagnosis and de-biasing / balancing strategy, to target the data augmentation specifically to address spurious concept co-occurrences. The main components of ConBias are as follows:
1. New concept graph-based analysis framework which involves occurrences of common concepts and classes in the data in a graph-form to identify spurious correlations
2. Identify main cliques that occur across all classes, and determine the frequency of occurrence with respect to each class to find spuriously correlated concepts
3. Re-balance the dataset based on the imbalanced concept cliques; specifically, generate new backgrounds for the objects using an inpainting-based augmentation.
Then they perform a thorough evaluation of their new re-balanced dataset across various evaluation protocols including the class-balanced test data and the OOD test data, as well as other metrics for shortcut learning. The results show that ConBias consistently outperforms other existing baselines. The authors also show ablations to justify their design decisions around the graph structure, as well as the generative model choice.
Strengths: Originality: The work demonstrates originality by introducing targeted dataset-debiasing based on concept co-occurrence diagnosis within the dataset augmentation space. Unlike previous approaches that often debias without diagnosing or only diagnose biases within datasets without suggesting effective debiasing methods directly to the data, this method focuses on improving the robustness of binary classification tasks through meticulous dataset examination.
Quality: The paper presents a novel approach that consistently outperforms existing debiasing augmentation methods. The experimental setup spans multiple datasets across various domains, providing robust justification for the proposed approach. Results are clearly presented, demonstrating reasonable improvements in performance metrics.
Clarity: The paper is well-written and logically organized, making it easy to follow the ConBias pipeline. Graphs and illustrations are informative, enhancing understanding of each step and experimental setup.
Significance: This work introduces a debiasing method aimed at reducing overfitting to spurious correlations. This is significant as it can improve the reliability of vision tasks across various applications, especially in domains where robust performance on out-of-distribution (OOD) cases is crucial, such as healthcare diagnosis
Weaknesses: The binary classification test bench used in the experiments may be insufficient in scope to fully demonstrate the potential of the proposed method. While the results show promise, it remains unclear if the method's performance scales effectively to multiple sets of classes, which is crucial for assessing its broader impact and significance. Moreover, in scenarios with multiple sets of classes, there exist spurious correlations across different subsets that require a more sophisticated and targeted sampling strategy than what the current approach, which extracts shared cliques across all classes, provides.
In addition, I believe further justification or benchmarking against debiasing methods in the feature space is necessary. Although this work pioneers combined diagnosis and debiasing in the dataset space, previous research has focused on diagnosing and debiasing models at training time or in feature spaces. Earlier methods required dataset bias labels, while recent advancements explore shortcut learning without explicit labels. The paper briefly mentions choosing the dataset space for enhanced interpretability but does not fully justify the computational trade-offs involved in generating balanced data as well as concept labels. The work would benefit significantly from additional benchmarking against state-of-the-art feature space analysis and debiasing methods where they are able to demonstrate improvements or providing a detailed rationale for its focus on the dataset debiasing.
Technical Quality: 3
Clarity: 4
Questions for Authors: Related to the preceding section, I have several inquiries for the authors:
1. Could you provide insights into how this method might scale effectively in a multi-class classification scenario? What are your expectations regarding its performance scalability?
2. Could you clarify why additional evaluation metrics were exclusively presented for UrbanCars and not for Waterbirds or Coco-GB datasets?
3. How precisely is the upsampling conducted concerning different sets of cliques? While I understand the additional image count aligns with ALIA's methodology for fair comparison, it remains unclear how many images this comprises, how various co-occurring object sets contribute to the augmentation dataset distribution, and the criteria used in making these decisions.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Yes, the authors sufficiently address limitations and societal impact in the final section of their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the encouraging review and the detailed feedback.
**Extension to multi-class tasks and scalability:**
ConBias can be conveniently extended to the multi-class setup. For this, we note the definitions in line 142 and line 146. While the common clique computation remains the same, for the imbalanced clique computation, instead of taking the pairwise absolute difference between classes, we can use a different metric such as variance and entropy to estimate the degree of imbalance among multiple classes. Functionally, the imbalanced set would contain the same information, i.e. how disproportionately is one concept combination (1-clique, 2-clique, etc.) represented with respect to a particular class relative to the other classes? In addition, in the attached PDF for the rebuttal (Figure 1), we also show how ConBias can diagnose biases in a complex multi-class dataset such as Imagenet-1k.
We will add this discussion to the final version and thank the reviewer for the insightful question.
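As a concrete illustration of the multi-class extension sketched above, an entropy-based imbalance score over a clique's per-class counts might look like the following (our own illustrative sketch, not code from the paper; the function name `imbalance_score` is made up):

```python
import math

def imbalance_score(counts):
    """Entropy-based imbalance of a clique's per-class occurrence counts.

    Returns 0.0 when the clique is equally represented across classes,
    approaching 1.0 when all its mass is concentrated in one class.
    """
    total = sum(counts)
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(counts))  # entropy of the uniform distribution
    return 1.0 - entropy / max_entropy
```

A score near 1 flags a clique concentrated in a single class, i.e., a candidate spurious correlation, playing the same functional role as the pairwise absolute difference in the binary case.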
For scalability, we invite the reviewer to refer to Section H in the Supplementary section for details on runtime. In general, given $K$ classes and $C$ concepts, the graph clique enumeration is expected to grow as $O(\exp(|K+C|))$. However, in practice, we find that constraining clique sizes to $k \leq 4$ leads to interpretable bias combinations, without the exponential runtime becoming significant. We agree that this is a heuristic, and more efficient clique enumeration methods can be developed (lines 314-315). We will update the paper to reflect this point raised by the reviewer.
**Comparison to debiasing methods based on the feature space:**
ConBias is a data-centric method rather than a model-centric method. We wish to intervene in the data directly rather than using a specific model as a proxy (lines 80-83). Once the biases are diagnosed, we can controllably and reliably generate fair data that improves model debiasing and generalization capabilities on downstream tasks.
A direct benchmarking against feature based methods would not be feasible since it is difficult to control for the effect of the augmented dataset. This is why our baselines, such as ALIA, also do not compare against feature-based debiasing methods. To ensure a fair comparison for data debiasing techniques, we need to ensure that the effect of adding data to the training set is adjusted for. In a pure feature debiasing based setup, this would be hard to control. However, the reviewer raises an important point about the dependence between data and feature debiasing. We agree that these two are not mutually exclusive. In fact, this is why we present results in Figure 6 where we visualize the classifier feature heatmaps after retraining. We demonstrate that the features learned by the classifier are indeed the core features, and not spurious features. On each dataset in Figure 6, we show that the classifier focuses on the object of interest (the core features), and not the background/gender/co-occurring object - which are precisely the spurious cues.
**Evaluation metrics:**
In Table 4, we evaluate the multi-shortcut abilities of ConBias. Multiple shortcut mitigation is a recent robustness problem introduced by Li, Zhiheng, et al. "A whac-a-mole dilemma: Shortcuts come in multiples where mitigating one amplifies others." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. UrbanCars is the standard dataset for evaluating multiple shortcut robustness, with BG-Gap, CoObj-Gap, and BG+CoObj-Gap as the evaluation metrics. Both Waterbirds and COCO-GB are single shortcut datasets, and as such, these metrics do not apply to these datasets.
**Upsampling:**
The rebalance sampling algorithm receives the concept graph as a concept co-occurrence matrix. The algorithm iterates through all cliques in order of decreasing clique size to ensure we do not double-compensate for the imbalance; e.g., 3-cliques would impact the already-balanced 2-cliques if we operated in a bottom-up fashion. At each iteration, it retrieves all concept cliques of size $k$ along with their corresponding frequencies for each class. The algorithm identifies the maximum co-occurrence count among all classes for each combination and checks if any class is under-represented by comparing its count with the maximum. If a class is under-represented, the algorithm computes the number of synthetic samples needed to balance the representation and adds this information to the results list. This process continues for all combinations and classes until all clique sizes have been processed. The output of the algorithm is a list of queries specifying the class, concept combination, and the number of samples needed to balance the dataset. We will include pseudo-code in the final version for better readability.
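The procedure just described can be sketched in a few lines of Python (a simplified illustration under hypothetical names and input format, not the authors' actual implementation):

```python
def rebalance_queries(clique_counts):
    """Given per-class co-occurrence counts for each concept clique,
    return the synthetic-sample queries needed to balance the dataset.

    clique_counts: {clique (frozenset of concepts): {class_name: count}}
    Returns a list of (class_name, clique, n_samples_needed).
    """
    queries = []
    # Process larger cliques first so already-balanced smaller
    # cliques are not double-compensated.
    for clique in sorted(clique_counts, key=len, reverse=True):
        per_class = clique_counts[clique]
        max_count = max(per_class.values())  # best-represented class
        for cls, count in per_class.items():
            if count < max_count:  # class is under-represented
                queries.append((cls, clique, max_count - count))
    return queries
```

Each returned query can then be turned into a generation prompt for the diffusion-based augmentation stage.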
---
Rebuttal Comment 1.1:
Comment: Reviewer TVhQ, do you have any additional questions or feedback? | Summary: This paper points out inherent biases in visual dataset. To address this issue, the paper proposes a new framework called ConBias, which proceeds in three steps: (1) Concept Graph Construction, (2) Concept Diagnosis, and (3) Concept Debiasing. Using concept metadata in the dataset, concept graph is constructed. After identifying imbalanced clique sets, ConBias utilizes Stable Diffusion to generate images containing under-represented concepts across classes. The experimental results show the effectiveness of the proposed framework to mitigate dataset biases.
Strengths: 1. The paper proposes a new concept graph-based framework, which is easy to construct using metadata and diagnose biases in visual datasets.
2. A new approach to generate unbiased images using stable diffusion robustly enhances the overall performance.
3. The experimental results across different datasets show reasonable performances compared to the state-of-the-art method.
Weaknesses: 1. The main concern is that the proposed framework should require metadata to construct a graph. It can be a problem to apply this method directly in many cases where visual datasets do not include any metadata.
2. I do not agree that the existing approach utilizing LLMs is unreliable, since (1) there is also a possibility that the given metadata has wrong or biased information, and (2) as recent LLMs become more powerful, the quality of domain descriptions becomes reliable (and will automatically improve over time). To sum up, there is no guarantee that the proposed method is always less biased compared to the existing method.
3. The processes to construct graph and diagnose bias do not look technically novel. The overall process is primarily based on a counting mechanism, not a learning-based method.
4. The authors tested the proposed framework on three benchmark datasets. However, the used datasets look like toy experiments, i.e., all the tasks are binary classification tasks where biases are easy to identify. I wonder whether the proposed method can still construct and detect inherent, complex biases in large, multi-class tasks, e.g., ImageNet.
5. The experimental results in the tables are quite different from the values reported in the ALIA paper [8]. If this is the case, can you explain the reasons?
6. There are no comparisons of computational complexity between the existing methods and the proposed one. Please clarify how the costs scale with the total number of concepts.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Please describe the total numbers of generated graph nodes and edges.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors address limitations and broader impact well in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments.
**Metadata:**
As we discuss in the main paper (lines 324-325), we assume the availability of reliable ground truth concept sets. Such annotations already exist for the datasets we investigate - Waterbirds, UrbanCars, and COCO-GB. We agree that unreliable ground truth concept sets would hinder generalization abilities, but this assumption is no different from the assumption of reliable ground truth labels in classification tasks. Moreover, the reliance on ground truth concept sets, sometimes referred to as concept banks, has also been considered in [A]. Ground truth concept sets serve as auxiliary knowledge bases and provide human-level interpretability for the task at hand. We look forward to using partially available concept sets in future work (lines 324-325). We will update the paper to reflect this important point raised by the reviewer.
[A] Wu, Shirley, et al. "Discover and cure: Concept-aware mitigation of spurious correlation." In ICML, 2023.
**LLMs:**
With regard to the reviewer’s first point, we do not argue in the paper that LLMs are not reliable for metadata, and we agree with the reviewer that LLMs are useful for a variety of tasks. The metadata that we use comes from ground truth concepts that are available in all the datasets tested in our work. We do not use LLMs to extract metadata for graph construction.
Instead, we argue that LLMs are unreliable in the generation of fair, balanced data (lines 36-39). This is an issue that has been acknowledged by the authors of ALIA as well (please refer to Section 6 in the ALIA paper). This is the issue we overcome with ConBias: Instead of using an off-the-shelf LLM to generate diverse image data, we use a controllable and interpretive knowledge graph that encodes the class-concept imbalances in the data.
**Novelty:**
The core intuition of ConBias is that we want a controllable and interpretable way to generate debiased data. In a learning-based framework, these two requirements are compromised, since a gradient-based optimization technique relies on parameters that may render the model a black box.
ConBias is not simply a statistical co-occurrence counting mechanism. In fact, in lines [87-90], we mention that there are diagnostic tools that exist today which compute such object co-occurrence statistics. The novelty of our method is in the creation of a graph that encapsulates multiple class-concept co-occurrences. The analysis of the class-concept graph cliques is crucial to the graph construction. In Table 2, we show the benefit of using the graph structure as opposed to a simple counting mechanism based on statistical co-occurrences. Additionally, our method is novel in that, to the best of our knowledge, no existing works leverage a concept co-occurrence-based graph to simultaneously diagnose and debias datasets. Our results, demonstrated in Tables 1-4, confirm the usefulness of the graph clique balancing strategy.
Broadly, we believe that if we wish to debias datasets, we need human-level intuition on what concepts are biased, and what concepts need to be debiased in a principled fashion, corresponding to human-level intuition. ConBias provides this intuition (please refer to Figure 3 and Figure 4 in the main paper, and Section D in the supplementary section). A black-box learning based approach, on the other hand, would hinder these capabilities.
**Extension to multi-class tasks:**
Since the reviewer raised the example of Imagenet, in the attached PDF above, we present new results on diagnosing multiple classes in Imagenet-1k. We discovered some interesting spurious correlations uncovered by ConBias. As such, ConBias can be seamlessly used for datasets of increasing complexity. We will update the paper to include these results.
Waterbirds and UrbanCars, while binary, are the de facto datasets used in the literature for evaluating single-shortcut and multi-shortcut learning in classifiers. These are the most commonly used datasets for benchmarking purposes. COCO-GB is a subset of the MS-COCO dataset that includes real-world data consisting of humans and everyday objects in the wild.
We include the motivations behind the datasets and other details in Section 4.1 and Supplementary Section B. For the reviewer’s convenience, we also include three references here that may clarify the use of these datasets:
- Li, Zhiheng, et al. "A whac-a-mole dilemma: Shortcuts come in multiples where mitigating one amplifies others." In CVPR. 2023.
- Sagawa, Shiori, et al. "Distributionally Robust Neural Networks." In ICLR 2020.
- Tang, Ruixiang, et al. "Mitigating gender bias in captioning systems." In WWW. 2021.
**Results of ALIA:**
We invite the reviewer to refer to Section G of the Supplementary material, where we present the confidence intervals in addition to the averaged results in Table 1. The results presented are within the confidence intervals as presented in the ALIA paper, who also average results over three runs.
**Computational complexity:**
We invite the reviewer to refer to Section H in the Supplementary section for details on runtime. In general, given $K$ classes and $C$ concepts, the graph clique enumeration is expected to grow as $O(\exp(|K+C|))$. However, in practice, we find that constraining clique sizes to $k \leq 4$ leads to interpretable bias combinations, without the exponential runtime becoming significant. We agree that this is a heuristic, and more efficient clique enumeration methods can be developed (lines 314-315). We will update the paper to reflect this point raised by the reviewer.
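To make the size cap concrete, a brute-force enumeration of cliques up to size $k$ over a concept co-occurrence graph could look like the following (an illustrative sketch with made-up names; a practical implementation would use an optimized clique enumerator):

```python
from itertools import combinations

def cliques_up_to_k(adj, k=4):
    """Enumerate all cliques of size <= k in an undirected graph.

    adj: {node: set of neighbouring nodes} (concept co-occurrence graph).
    Brute-force by construction; tractable only because k is capped.
    """
    nodes = sorted(adj)
    cliques = []
    for size in range(1, k + 1):
        for combo in combinations(nodes, size):
            # a combo is a clique iff every pair of its nodes is connected
            if all(v in adj[u] for u, v in combinations(combo, 2)):
                cliques.append(combo)
    return cliques
```

Capping the clique size bounds the inner loop at $O(|V|^k)$ candidates rather than the exponential worst case over all clique sizes.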
**Total number of graph nodes and edges:**
For Waterbirds, there are 66 nodes and 865 edges in the graph. For UrbanCars, the graph contains 19 nodes and 106 edges. For COCO-GB, there are 81 nodes and 2326 edges in the graph. We will update the paper to include these additional details.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses from the authors. Some ambiguities I raised have been addressed. However, I still have concerns about the authors' responses, so I keep my initial rating. For example, regarding metadata, I was referring to graph construction at inference time, where no labels exist. I know that the datasets the authors used in the manuscript have metadata, but in many real-world scenarios, most data samples are unlabeled, and ConBias cannot be applied there. Moreover, I think the authors misunderstood my point about LLMs. I did not mean that LLMs are not reliable for metadata, or that the authors use LLMs to extract metadata for graph construction.
---
Rebuttal 2:
Comment: We thank the reviewer for the comments. Here, we address the points on metadata and LLMs.
**Metadata**
In the absence of ground truth metadata, one can leverage large multimodal models (LMMs), for instance in segmentation or open-vocabulary detection, and generate such concepts. However, there remains the possibility of noisy metadata that may contain unreliable artefacts, in addition to the biases within these models themselves. In this work, we restrict ourselves to available, high-quality, ground truth concepts, to showcase the usefulness of our framework. Pursuing open-vocabulary models to generate metadata is an interesting direction of future work. In general, ConBias is not constrained by how the metadata is obtained, but in the quality of metadata obtained. Future developments in LLMs/LMMs that can generate high quality concept metadata can be seamlessly integrated into the ConBias framework. We will be sure to update the paper with this discussion.
Our core contribution with ConBias is not the metadata stage, which we assume to be available and high-quality, similar to other works in the past [A, B]. Our core contribution is the diagnosis and debiasing of datasets with ConBias, which leads to significant improvements on multiple datasets with respect to the current state-of-the-art. As requested by the reviewer, we have also provided diagnosis results on a more complex dataset such as Imagenet-1k.
[A] Wu, Shirley, et al. "Discover and cure: Concept-aware mitigation of spurious correlation." In ICML, 2023.
[B] Lisa Dunlap, Alyssa Umino, Han Zhang, Jiezhi Yang, Joseph E Gonzalez, and Trevor Darrell. Diversify your vision datasets with automatic diffusion-based augmentation. Advances in Neural Information Processing Systems, 36, 2024
**LLMs**
We reiterate that, as mentioned in lines 36-39, relying on LLMs to generate diverse, unbiased descriptions is problematic since LLMs themselves may be biased, and such generation is not controllable. This issue of relying on LLMs has also been addressed in the ALIA paper (Section 6) [A], and it is precisely this issue that we fix with ConBias. By leveraging the concept graph, ConBias can generate a debiased dataset in a controlled and interpretable manner.
[A] Lisa Dunlap, Alyssa Umino, Han Zhang, Jiezhi Yang, Joseph E Gonzalez, and Trevor Darrell. Diversify your vision datasets with automatic diffusion-based augmentation. Advances in Neural Information Processing Systems, 36, 2024
---
Rebuttal Comment 2.1:
Comment: Reviewer XqYX, do you have any additional questions or feedback? | Summary: This paper introduces a concept graph-based framework to diagnose and mitigate biases in visual datasets by representing datasets as knowledge graphs of object co-occurrences. The approach involves constructing a concept graph, diagnosing concept imbalances, and debiasing by generating images with under-represented concept combinations. This method enhances dataset balance and improves generalization performance across multiple datasets.
Strengths: The paper presents a structured and controllable method for diagnosing and mitigating spurious correlations by representing datasets as knowledge graphs of object co-occurrences.
Experimental results demonstrate that balanced concept generation in data augmentation enhances classifier generalization across multiple datasets, outperforming baseline methods.
This manuscript is written well, it’s easy to read and follow.
Weaknesses: This manuscript seems to be an engineering technical report about data augmentation, and the technical contribution is weak for an academic paper. The method of constructing a knowledge graph to determine object co-occurrences is overly direct; statistical analysis of objects and frequency comparison can achieve similar results.
• What is the deeper intuition behind constructing a knowledge graph to determine object co-occurrences? What advantages does it offer over previous methods?
• The paper lacks details on the concept set annotations. How might the quality of these annotations affect the performance of the proposed method?
• Why are the results of ALIA in Table 1 different from those reported in ALIA [1]?
[1] Dunlap, Lisa, et al. "Diversify your vision datasets with automatic diffusion-based augmentation." In NeurIPS, 2024.
There is a typographical error on line 198: a space is missing after the period.
Technical Quality: 2
Clarity: 3
Questions for Authors: see above
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, the authors discuss limitations in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback.
**Intuition and advantage of constructing knowledge graph:**
ConBias is not simply a statistical co-occurrence counting mechanism. In fact, in lines [87-90], we mention that diagnostic tools already exist that compute such object co-occurrence statistics. The novelty of our method lies in the creation of a graph that encapsulates multiple class-concept co-occurrences, and the analysis of cliques in this class-concept graph is crucial to our approach. In Table 2, we show the benefit of using the graph structure as opposed to a simple counting mechanism based on statistical co-occurrences. Additionally, our method is novel in that, to the best of our knowledge, no existing work leverages a concept co-occurrence-based graph to simultaneously diagnose and debias datasets. The advantage over previous methods lies in the controllability and interpretability of debiased data generation. Our results, as demonstrated in Table 1 and Table 4, show significant advantages in generalization over previous approaches.
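As a toy illustration of the kind of class-concept co-occurrence bookkeeping described above (the data, threshold, and variable names here are all hypothetical; ConBias's actual clique-based graph analysis is more involved), one could sketch:

```python
from collections import Counter

# Hypothetical toy data: each image is a (class label, set of concepts) pair,
# in the spirit of a class-concept co-occurrence graph.
dataset = [
    ("waterbird", {"water", "boat"}),
    ("waterbird", {"water", "rock"}),
    ("waterbird", {"water"}),
    ("landbird", {"forest", "tree"}),
    ("landbird", {"forest"}),
    ("landbird", {"water"}),  # rare class-concept combination
]

# Edge weights of the co-occurrence graph: (class, concept) -> frequency.
edges = Counter()
for cls, concepts in dataset:
    for c in concepts:
        edges[(cls, c)] += 1

# Diagnose imbalance: concepts co-occurring far more with one class than
# another are candidate spurious correlations; the under-represented
# (class, concept) pairs are the ones to target with generated images.
classes = sorted({cls for cls, _ in dataset})
concepts = sorted({c for _, cs in dataset for c in cs})
underrepresented = []
for c in concepts:
    counts = {cls: edges[(cls, c)] for cls in classes}
    if max(counts.values()) > 2 * min(counts.values()):  # arbitrary threshold
        rare_cls = min(counts, key=counts.get)
        underrepresented.append((rare_cls, c))

print(underrepresented)
```

This only counts pairwise co-occurrences; the graph-based clique analysis in the paper goes beyond such simple counting, as Table 2 of the paper compares.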
**Details of concept set annotations:**
We described the details for all the datasets used in Section B and Section C of the appendix and the supplementary materials. In Table A2 of the appendix, we detail the list of concept sets: the Waterbirds dataset has 64 unique concepts; the UrbanCars dataset has 17 unique concepts; the COCO-GB dataset has 81 unique concepts. All the concepts are from the MS-COCO dataset. We will incorporate these details into the main paper in the final version.
**Effect of the quality of the annotations:**
We assume the concept sets to be available for graph construction. This assumption is analogous to assuming that reliable ground-truth labels are available for classification tasks: just as unreliable ground-truth labels can hinder classifier performance, the concept sets also need to be of high quality and reliability to ensure strong generalization. We mention this assumption in the final section of the main paper, and we look forward to future work that investigates unreliable or partially available concept sets. We will update the paper to clarify this important point raised by the reviewer.
**Results of ALIA:**
We invite the reviewer to refer to Section G of the appendix, where we present the confidence intervals in addition to the averaged results in Table 1. The results presented are within the confidence intervals reported in the ALIA paper, which also averages results over three runs.
**Typos in L198:**
Thank you and we will correct it in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. It resolves my concerns. I keep my score: borderline accept for this work. | Rebuttal 1:
Rebuttal: We thank the reviewers for providing valuable feedback on our work. We are glad that they found our idea "original" (R3); our diagnosing and debiasing method "structured and controllable" (R1), "novel, significant" (R3); our graph-based framework "a new approach to generate unbiased images" (R2), "robust" (R2); our graphs and illustration "informative" (R3); our experiments and justification "robust" (R3), "thorough" (R4); our paper "well-written" (R1, R4).
In this rebuttal, we separately address each reviewer’s remaining comments as follows. Also, the attached PDF contains results from diagnosis experiments on Imagenet-1K, as requested by reviewer R2. We will incorporate all the feedback to the best of our abilities.
R1: Reviewer **xMok**; R2: Reviewer **XqYX**; R3: Reviewer **TVhQ**; R4: Reviewer **BpQx**.
Pdf: /pdf/e0031c7bc8e99cd86ffd26550dcae2767103222b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
High-probability complexity bounds for stochastic non-convex minimax optimization | Accept (poster) | Summary: This paper proves a high probability convergence bound and almost sure convergence for stochastic smoothed AGDA method for minimax problems in nonconvex-PL setting. This is the first high probability guarantee and almost sure convergence guarantee in this nonconvex setting.
Strengths: This is the first high probability convergence guarantee and almost sure convergence guarantee in this nonconvex setting, going beyond convex setting and convergence in expectation.
The paper also presents a concentration bound (Theorem 9), which is of independent interest; this bound allows one to obtain high-probability convergence results from in-expectation convergence results for minimax problems.
Weaknesses: See Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The algorithm and analysis are built upon the approaches in [63] and [68]. Could the authors provide more insights on the novelty of the proof compared to [63] and [68]?
2. Is it possible to extend this analysis to other settings like the nonconvex-concave setting?
3. Can this analysis be extended to other algorithms such as stochastic GDA?
4. Missing references:
[1] Shen et al., Stochastic gradient descent ascent for federated minimax optimization.
[2] Zheng et al., Universal gradient descent ascent method for nonconvex-nonconcave minimax optimization.
[3] Ozdaglar et al., What is a good metric to study the generalization of minimax learners?
[4] Li et al., Nonsmooth-nonconvex-nonconcave minimax optimization: primal-dual balancing and iteration complexity analysis.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for all the insightful comments. Below are our responses:
**1-** [68] analyzes sm-AGDA in the deterministic nonconvex-concave setting, and introduces a Lyapunov function to establish a complexity result. In a follow-up work, [63] leverages the same Lyapunov function to adapt the analysis to the stochastic nonconvex-PL setting, and establishes $O(\ell\kappa / \epsilon^2 + \ell\kappa^2 \sigma^2 / \epsilon^4)$ complexity; $\kappa$ is the condition number, and $\sigma^2$ bounds the variance.
Some Lyapunov arguments in our proof are built upon these analyses to establish our high-probability (HP) bound. In fact, it is a common strategy in the literature on HP bounds to build upon existing deterministic/in-expectation analyses [81,82]. Yet the key difficulty remains in controlling how the gradient noise accumulates along the iterations. While one may intuitively expect subGaussian gradient estimates to produce a light-tailed distribution for sm-AGDA’s output, achieving a precise bias-quantile trade-off in the complexity bound usually turns out to be highly nontrivial, even in the strongly convex-strongly concave (SCSC) setting [34].
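As a toy numerical illustration of why light tails matter for quantiles (not the paper's processes; the Gaussian error, constants, and the two-sided tail bound used here are our own simplifying assumptions), one can compare the $(1-q)$-quantile of a subGaussian error with the bound Markov's inequality would give:

```python
import math
import random

# Toy illustration: compare the (1-q)-quantile of a light-tailed error |X|,
# X ~ N(0, delta^2), with two upper bounds:
#   Markov:       Q_{1-q}(|X|) <= E|X| / q                  (scales like 1/q)
#   subGaussian:  Q_{1-q}(|X|) <= delta*sqrt(2*ln(2/q))     (scales like sqrt(ln(1/q)))
random.seed(0)
delta, q = 1.0, 0.01
samples = sorted(abs(random.gauss(0.0, delta)) for _ in range(200_000))
empirical_quantile = samples[int((1 - q) * len(samples))]

markov_bound = delta * math.sqrt(2 / math.pi) / q           # E|X| = delta*sqrt(2/pi)
subgaussian_bound = delta * math.sqrt(2 * math.log(2 / q))  # two-sided tail bound

print(empirical_quantile, subgaussian_bound, markov_bound)
```

The subGaussian quantile bound grows only like $\sqrt{\log(1/q)}$ as $q \to 0$, while the Markov bound blows up like $1/q$; this is the qualitative gap the paper's HP analysis exploits.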
In the nonconvex-PL case, we show that for finding an $\epsilon$-solution w.p. at least $1 - q$, the complexity of sm-AGDA is $O(\ell\kappa / \epsilon^2 + \delta^2 \log(1/q) \kappa / \epsilon^2 + \ell\kappa^2 \delta^2 / \epsilon^4)$ under the subGaussian assumption on the gradients. This bound is tight: for the deterministic case, i.e., when $\delta = 0$, we recover the existing $O(\kappa / \epsilon^2)$ complexity bound for sm-AGDA; and the quantile dependence of our bounds scales favorably with $\epsilon$ and $q$. Our bound is tighter than those obtained by the standard expectation to HP conversions, as we explain in our general response to the reviewers above.
Our main technical novelty behind this result is the identification of stochastic processes (see $ \tilde{A}_t, \tilde{B}_t, \tilde{C}_t,$ and $\tilde{D}_t$ in Cor. 8) that exhibit the desired concentration properties as stated in our generic concentration result (Thm. 9). Constructing these processes, e.g., $\tilde{C}_t$, is non-trivial, requiring careful measurability considerations along with deriving some new convex inequalities to extend the existing analysis for deriving expectation guarantees [63] and for the deterministic setting [68] of sm-AGDA. These challenges are specific to HP analyses and do not occur in the less sensitive expectation bounds from [63], where noise can be mitigated using the properties of conditional expectation. For a discussion on the challenges related to measurability in the Lyapunov analysis of momentum-averaged algorithms, we refer to [34], which obtained HP bounds in the SCSC setting. Similarly, sm-AGDA employs momentum averaging in the non-convex setting whenever $\beta > 0$, which complicates the analysis with measurability issues comparable to those arising in [34].
Finally, a surprising result of our analysis, contrary to naive approaches leveraging Markov’s inequality by running the method several times in parallel, is that the cost of getting a HP bound only affects negligible terms of the initial expectation bounds giving $ O(\epsilon^{-4} + \log(1/q) \epsilon^{-2}) $ complexity, instead of multiplying the whole bound by $\log(1/q)$ to get a worse bound of $O(\log(1/q) \epsilon^{-4})$ - see our general response in that regard. This feature departs from standard guarantees in the SCSC setting [34].
**2-** In our proof, we use the PL property of the dual to achieve an appropriate Lyapunov descent using stoc. gradients. It is not clear whether our proof directly extends to the nonconvex-concave setting. The only convergence analysis we know for sm-AGDA in the nonconvex-concave setting is given in [68]; but, this analysis is given only for the deterministic case and requires the compactness of the dual domain. Yet, we believe that combining our analysis techniques with some deterministic inequalities from [68] may allow to extend our results to the nonconvex-concave setting. We will consider this as part of future work as it would require a detailed analysis with some fundamental differences from the current work.
**3-** In stoc. GDA, the primal and dual updates can be implemented either in an alternating fashion or in a Jacobi style where both updates are performed simultaneously. The alternating case is referred to as stoc. alternating GDA (AGDA). By slightly modifying our proof, we can show a similar HP result holds for stoc. AGDA, with a degraded complexity of $O(\ell \kappa^2 / \epsilon^2 + \delta^2 \log(1/q) \kappa^2 / \epsilon^2 + \ell \kappa^4 \delta^2 / \epsilon^4)$. Our stoc. AGDA’s complexity has extra $\kappa$ factors on the bias and variance/quantile error parts, respectively, when compared to the sm-AGDA bound we derive in our paper. To obtain the stoc. AGDA bound, we apply Thm. 9 to the modified stochastic processes: $\tilde A_t = \tau_1 V_t, \tilde B_t = \frac{\tau_1}{32} \|\| \nabla \Phi(x_t) \|\|^2 + \frac{\tau_2}{64} \|\| \nabla f(x_t, y_t)\|\|^2, \tilde D_t = \kappa \ell\tau_1^2\|\| \Delta_t^x \|\|^2 + \ell\tau_2^2 \|\| \Delta_t^y \|\|^2/8$,
$$\tilde C_t = \left\langle \left( (1 + \alpha) \frac{2\ell + \ell \kappa}{2} \tau_1^2 - \alpha \left(\tau_1 + \ell \tau_1^2 - \tau_2 \ell^2 \tau_1^2 \right) \nabla_x f(x_t, y_t) - (1 + \alpha) \tau_1 \nabla \Phi(x_t) \right), \Delta_t^x \right\rangle - \left\langle \alpha (\tau_2 - \ell \tau_2^2) \nabla_y f(x_t, y_t), \Delta_t^y \right\rangle,$$
where $V_t = (1 + \alpha) \Phi(x_t) - \alpha f(x_t, y_t)$ is a Lyapunov function, $\alpha$ is a particular constant. Jacobi-style stoc. GDA is known to be divergent even in the convex-concave setting [80], so we did not consider it. We also added the missing references.
**References**
[80] Zhang, Wang, Lessard, Grosse. AISTATS 2022.
[81] Harvey, Liaw, Plan, Randhava, COLT 2019.
[82] Rakhlin, Shamir, Sridharan, ICML 2012.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will keep my score. | Summary: This paper investigates the stochastic non-convex minimax optimization problem $\min_x \max_y f(x,y)$ where the function $f$ is smooth, then nonconvex in $x$ and PL in $y$. The authors presented a high probability analysis for the smoothed alternating gradient descent ascent (sm-AGDA) method. This analysis is based on a Lyapunov function from prior work for non-convex concave setting. Also, under a light-tail assumption on the gradient noise, the authors developed a concentration result. Finally, they include experiments of nonconvex-PL setting with synthetic data and then distributionally robust optimization problems with real data.
Strengths: The authors provided theoretical proofs and experiments to support their claims. The analysis seems to be solid. The authors compared their complexity with other high probability bounds results.
Weaknesses: 1. While the authors cited some results in expectation for the similar setting, I think it is important to compare the (Markov) high probability bounds obtained by these expectation results with the results obtained in this paper. If the results are not better, then the contribution is limited.
Also I don't get what the authors meant by "More precisely, we focus on a purely stochastic regime in which data streams over time which renders the use of mini-batch schemes or running the method in parallel impractical; therefore, approaches based on Markov’s inequality [59] are no longer applicable." (line 99-101).
2. In the theorems and lemma, the values $\tau_1, \tau_2, \tau, \bar{\tau}$ are really confusing. Do you need to pick specific values for the step sizes, and if so, what is $\tau_2$? Are $\tau$, $\bar{\tau}$, and $\tau_1$ the same?
Update: I thank the authors for the rebuttal. I keep my recommendation the same, as changing to high probability bound from expectation results is somewhat expected.
Technical Quality: 2
Clarity: 2
Questions for Authors: Why in the experiments the authors compare with SAPD+ and SMDAVR, not the methods in Table 1?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for their time invested in reviewing our paper. Below are our responses to the questions raised.
**Weaknesses**
**1-** Please see our general response to the reviewers above in the window titled **General response: Tightness of our high-probability bounds,** where we explain why our bounds are significantly tighter than any naive bound based on applying Markov's inequality to the existing results in expectation. As we discuss in detail there, applying Markov's inequality yields significantly loose bounds in terms of their dependence on the quantile parameter $q \in (0,1)$.
A more advanced approach - see e.g. [19,59] - involves running $O(\log(1/q))$ copies of the sm-AGDA algorithm together with a postprocessing step requiring additional $\mathcal{O}(1/\epsilon)$ samples for each copy to identify an $\epsilon$-stationary point with high probability.
The overall cost of this approach is of order $O(\log(1/q)) \cdot \mathcal G(\epsilon) + O(\log(1/q)) \cdot O(1/\epsilon) $, where $\mathcal G(\epsilon)$ denotes the complexity in expectation, assuming stochastic gradients have a variance bounded by $\sigma^2$. This results in an unavoidable dependence on the failure probability $q$ of order $\log(1/q) \epsilon^{-4}$, which is significantly worse than our $\log(1/q) \epsilon^{-2}$ dependence.
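The gap between the two scalings can be made concrete with a small back-of-the-envelope computation (constants, condition-number factors, and variance terms dropped; the function names are ours):

```python
import math

# Compare, up to constants, the restart-based complexity log(1/q)/eps^4
# with the paper's eps^-4 + log(1/q)/eps^2 for a few accuracy /
# failure-probability targets.
def restart_based(eps, q):
    return math.log(1 / q) / eps**4

def this_paper(eps, q):
    return 1 / eps**4 + math.log(1 / q) / eps**2

for eps, q in [(0.1, 1e-2), (0.01, 1e-4)]:
    r, p = restart_based(eps, q), this_paper(eps, q)
    print(f"eps={eps}, q={q}: restart ~{r:.2e}, ours ~{p:.2e}, ratio {r / p:.1f}")
```

The quantile factor $\log(1/q)$ multiplies only the cheaper $\epsilon^{-2}$ term in the paper's bound, so the gap widens as $\epsilon$ and $q$ shrink.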
Moreover, approaches like [19,59] can remain interesting in practice when one is allowed to process the $O(\log(1/q))$ runs of sm-AGDA in parallel, but they offer no advantage over our approach when parallelization is impractical or impossible. Indeed, approaches that require $O(\log(1/q))$ parallel runs can be impractical in the streaming data setting we consider, where data arrives one point at a time; this impracticality mainly stems from the fact that each parallel run would have to wait for the arrival of new data points before it could take a step. This is what we meant to convey in our sentence "*More precisely, we focus on a purely stochastic regime in which data streams over time, rendering the use of mini-batch schemes or running the method in parallel impractical, therefore ... [59] are no longer applicable*". In the revised version, we clarified this sentence and added comparisons with naive approaches based on the Markov inequality.
**2-** While $\tau_1$ and $\tau_2$ are primal and dual stepsizes, $\bar{\tau}$ corresponds to a particular choice of the primal stepsize where we set $\tau_1=\bar{\tau}$ in our main complexity result. To avoid confusion, we will remove $\bar{\tau}$ and use only $\tau_1$ instead. The symbol $\tau$ is used in our concentration inequality, to emphasize that it can in principle apply to any positive scalar (not necessarily the primal stepsize); however in practice we choose $\tau=\tau_1$. Therefore, to simplify the notation we will also replace $\tau$ with $\tau_1$ in the revised version.
**Questions**
**1-** The methods provided in the table are mostly for convex/concave problems and do not provide any guarantees for the stochastic nonconvex/strongly convex (NCSC) problem we consider in this experiment. There are also some VI results in the table, but they do not apply to NCSC problems either. The point of the table was to summarize the existing HP results and emphasize that no HP bounds were known for NCSC problems prior to our paper. SAPD+ and SMDAVR algorithms have in expectation guarantees for NCSC problems, but they do not admit HP guarantees. SAPD+ and SMDAVR were also among the most competitive ones tested on this example in [70]. This is the main reason why we compared sm-AGDA with these two algorithms even if they do not admit HP guarantees.
We hope these responses satisfy the reviewer and lead to an improvement in our score. | Summary: The authors build upon earlier works to prove a high probability upper bound, that is linear in variance of the stochastic gradients, on the number of stochastic gradient calls of smoothed alternating GDA method.
Strengths: The paper is well written with clear theorems, proofs and numerical results.
Weaknesses: Focusing only on their concentration bound, which they claim is of independent interest and is the crux of their high-probability bounds: the bound in Theorem 9 (which is fairly customized and not easy to state) follows like any other Chernoff/Chebyshev-type bound, using a Markov-type inequality after exponentiation, and I don't see any novelty in their theoretical ideas/techniques here. Perhaps I have missed something; if so, the authors can clarify it.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Theory paper, no concerns
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for their time invested in our paper. Below are our responses to the weaknesses raised.
While our concentration inequality (Thm 9) seems tailored to the analysis of sm-AGDA, we can argue that it can also aid in deriving high-probability bounds for many other nonconvex first-order methods that output a randomized iterate. Indeed, most Lyapunov arguments in the nonconvex setting are built upon telescoping quantities in line with our Thm 9. This includes stochastic alternating GDA for NCPL minimax problems and optimistic GDA [34] for strongly convex-strongly concave problems. The challenge lies in finding the appropriate stochastic processes $(\tilde{A}_t, \tilde{B}_t, \tilde{C}_t, \tilde{D}_t)$, as done in Cor 8 for sm-AGDA. This is akin to prime factorization: easy to verify, but hard to come up with. Thus, our significant technical novelty is devising these processes for sm-AGDA such that the concentration result in Thm 9 applies. We described Thm 9 as being of independent interest, not because it is particularly hard to prove but because it acts as a useful inequality potentially applicable to other algorithms like stochastic AGDA and stochastic gradient descent (SGD).
For example, stochastic AGDA is a special case of the sm-AGDA algorithm we analyzed, obtained by setting the parameters $\beta=p=0$. Thm 9 leads to a new complexity result of $O(\ell \kappa^2 / \epsilon^2 + \delta^2 \log(1/q) \kappa^2 / \epsilon^2 + \ell \kappa^4 \delta^2 / \epsilon^4)$ for stochastic AGDA, if we were to apply Thm 9 to processes $\tilde A_t = \tau_1 V_t, \tilde B_t = \frac{\tau_1}{32} \|\| \nabla \Phi(x_t) \|\|^2 + \frac{\tau_2}{64} \|\| \nabla f(x_t, y_t)\|\|^2, \tilde D_t = \kappa \ell\tau_1^2\|\| \Delta_t^x \|\|^2 + \ell\tau_2^2 \|\| \Delta_t^y \|\|^2/8$ together with an appropriately modified Lyapunov function $V_t$ and $\tilde C_t$ term. Similarly, by adapting these processes and by removing the dual terms that relate to the $y$ variable, we can obtain complexity results for SGD.
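For intuition, a minimal numerical sketch of smoothed-AGDA-style updates on a toy NCPL objective is shown below. The step sizes, smoothing/averaging form, and the toy objective are our own illustrative assumptions, not the paper's exact parameterization (which follows [64, 68]):

```python
import math
import random

# Toy sketch of smoothed-AGDA-style updates. The objective
# f(x, y) = sin(x)*y - y^2/2 is nonconvex in x and strongly concave
# (hence PL) in y; the primal step descends on the smoothed surrogate
# K(x, y; z) = f(x, y) + (p/2) * (x - z)^2, and z is a slowly moving
# proximal center. Setting beta = p = 0 recovers plain stochastic AGDA.
random.seed(1)
tau1, tau2, p, beta, delta = 0.1, 0.1, 1.0, 0.5, 0.01

def grad_x(x, y):  # stochastic gradient of f in x (Gaussian noise)
    return math.cos(x) * y + random.gauss(0.0, delta)

def grad_y(x, y):  # stochastic gradient of f in y
    return math.sin(x) - y + random.gauss(0.0, delta)

x, y, z = 1.0, 0.0, 1.0
best_grad_norm = float("inf")
for _ in range(2000):
    x = x - tau1 * (grad_x(x, y) + p * (x - z))  # primal descent on K
    y = y + tau2 * grad_y(x, y)                  # alternating dual ascent
    z = z + beta * (x - z)                       # proximal-center averaging
    gnorm = math.hypot(math.cos(x) * y, math.sin(x) - y)  # exact ||grad f||
    best_grad_norm = min(best_grad_norm, gnorm)

print(best_grad_norm)
```

On this toy problem the best iterate's gradient norm decays to the noise floor, consistent with the randomized-iterate guarantees discussed above.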
Thanks to the novelty of our approach, we can also obtain tight bounds as we discuss next. Specifically, our complexity bound for sm-AGDA for finding an $\epsilon$-stationary solution w.p. at least $1-q$ is $O \left(\ell \kappa/\epsilon^2 + \delta^2 \log\left(1/q\right) \kappa/\epsilon^2 + \delta^2 \ell \kappa^2/\epsilon^4\right),$ where $\kappa = \ell/\mu$ is the condition number, and $\delta^2$ is the subGaussian proxy tied to the stochastic gradient estimates $\widetilde{\nabla}_x f, \widetilde{\nabla}_y f$. These bounds are tight in the sense that when $\delta=0$, we recover the deterministic $O(\kappa/\epsilon^2)$ complexity for sm-AGDA; and the quantile term scales favorably with $\epsilon$ and $q$. One may expect the iterates to have light-tail properties when the gradient noise is light-tailed, but quantifying the proxy/variance of the iterates is not straightforward since it depends on various algorithm and problem parameters such as $\mu,\ell,\tau_1,\tau_2,\beta$, etc. Furthermore, turning in-expectation estimates to high-probability (HP) estimates via Markov inequality would result in much worse estimates than ours, as we explain in the window titled **General response: Tightness of our high-probability bounds**. As we discuss in this window, a surprising result of our analysis, contrary to advanced approaches leveraging Markov's inequality by running the method several times in parallel, is that the cost of getting an HP bound only affects negligible terms of the initial expectation bounds giving $O(\epsilon^{-4}+\log(1/q)\epsilon^{-2})$ complexity, instead of multiplying the whole in-expectation bound by $\log(1/q)$ in which case one gets a much worse $O(\log(1/q)\epsilon^{-4})$ complexity. This feature departs from standard guarantees developed for the last iterate in the SCSC setting [79,34].
Our bounds achieve good complexity by striking the right balance between the bias and variance/quantile terms in the complexity. This is accomplished by carefully defining the stochastic processes $\tilde{A}_t, \tilde{B}_t, \tilde{C}_t, \tilde{D}_t$ in our Cor 8 that exhibit the desired concentration properties as stated in our generic concentration result (Thm 9). In particular, these processes depend on the parameters $\tau_1, \tau_2,$ and $\beta$ in non-trivial ways, requiring parameter choice/design optimization to achieve our rate results. Naive choices would result in a much worse complexity bound in terms of its dependence on $\epsilon, \kappa, $ and $q$. In addition, constructing the processes $\tilde{A}_t, \tilde{B}_t, \tilde{C}_t, \tilde{D}_t$ requires careful measurability considerations along with deriving various inequalities (that carefully exploit the smoothness and PL properties of the objective) to extend the existing analysis for deriving expectation guarantees [63] and for the deterministic setting [68] of sm-AGDA. For a discussion on the challenges related to measurability in the Lyapunov analysis of momentum-averaged algorithms, we refer to [34], which obtained HP bounds in the strongly convex-strongly concave (SCSC) setting. Similarly, sm-AGDA employs momentum averaging using the parameter $\beta$ in the nonconvex setting and encounters comparable measurability issues. These challenges are specific to HP analyses and do not occur in the less sensitive expectation bounds from [64], where noise can be mitigated using the properties of conditional expectation. Therefore, a substantial amount of new work is in fact needed to obtain our results.
To summarize, our bounds are highly non-trivial and require significant novelty; we obtain by far the best known complexity for nonconvex-PL minimax problems. We would appreciate it if the referee would consider raising their score.
**References:**
[79] Cutler, Drusvyatskiy, and Harchaoui. Stochastic optimization under time drift: iterate averaging, step-decay schedules, and high probability guarantees. NeurIPS, 2021.
---
Rebuttal Comment 1.1:
Comment: > We mentioned Thm 9 to be of independent interest, not because it is particularly hard to prove but because it acts as a useful inequality potentially applicable to other algorithms like stochastic AGDA and stochastic gradient descent (SGD).
I have increased the score, but please clarify the above in the main body of the paper, so that your contributions are clear. | Summary: This paper considers the important open problem of stochastic smooth nonconvex minimax optimization. This paper proposes single-loop stochastic GDA method, which was known to be practically desirable but had no theoretical complexity compared to other non-single-loop methods with better complexies on nonconvex minimax problems. The analysis in this work fills some of above gap, provides the first high-probability complexity for nonconvex minimax while assuming a PL condition on dual variable, and proves assuming light tailed stochastic gradients GDA converges to a near stationary point with a certain complexity. Numerical results on NCPL game with synthetic data and distributionally-robust optimization with real data shows the proposed sm-AGDA outperforms existing algorithms SAPD+ and SMDAVR.
Strengths: 1. This paper solves the important open problem of stochastic smooth nonconvex minimax optimization, proposes a single-loop GDA method called sm-AGDA, which is constructed by Lyapunov function for nonconvex-concave problems.
2. Assuming PL condition, this paper proved the first high-probability complexity bound on such single-loop algorithm on nonconvex minimax problem. The order of complexity is reasonably good.
3. Numerical experiments show the proposed method is practically superior to existing methods.
Weaknesses: 1. The font size at the end of page 8 and on the entire page 9 is smaller than the required size.
2. There is not much algorithm novelty from [68].
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. What are the existing best-known complexities of (possibly non-single-loop) algorithms on nonconvex minimax problem?
2. Can you compare in more details (overall problem class, overall algorithm, assumptions, complexity order dependencies) of your complexity result compared with [64]?
3. Does each of your assumption (PL condition, etc.) hold for both of your numerical experiments?
**Update:** The authors' rebuttal addressed every question very well. I have increased my score from 6 to 7.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for their time invested in reviewing our paper. Below are our responses to the questions raised.
**Weaknesses**:
**1-** The smaller font size is a typo due to a misplaced bracket after a displayed equation. We will fix this issue; thanks for the good catch.
**2-** Our work extends sm-AGDA from [68] by allowing light-tail stochastic estimates $\widetilde{\nabla}_x f, \widetilde{\nabla}_y f$ in place of the exact partial gradients. This is relevant for large-scale optimization and machine learning where gradients are often estimated from streaming or random data samples. sm-AGDA's convergence in the stochastic setting was first established in expectation in [64]. Our work provides its **first** high-probability (HP) bounds for stochastic nonconvex-PL problems.
A substantial body of work has been dedicated to establishing HP guarantees for first-order methods. These works are often theoretical and involve complex arguments to achieve tight bounds, even in the minimization setting for methods like stochastic gradient descent (SGD) [77,78]. Our paper falls within this line of research.
Our main result (Thm 10) is a tight sampling complexity bound for finding an $\epsilon$-stationary solution with probability at least $1-q$ on nonconvex-PL problems. Our bounds are tight: when $\delta=0$ (i.e., gradient estimates are exact), we recover the deterministic complexity of sm-AGDA [68], and the quantile part of our bound (involving $\delta^2$) scales favorably with $\epsilon$ and $\kappa$. Turning expectation estimates into HP estimates via standard Markov approaches would result in much worse estimates than ours, as we explain in our **General response: Tightness of our high-probability bounds**.
Our key technical contribution lies in identifying stochastic processes ($\tilde{A}_t, \tilde{B}_t, \tilde{C}_t, \tilde{D}_t$ in Cor 8) with the desired concentration properties (Thm 9). Constructing these processes (e.g., $\tilde{C}_t$) is non-trivial, requiring careful measurability considerations and the derivation of various inequalities (exploiting smoothness and PL properties) to extend the existing analysis for deriving expectation guarantees [63] and for the deterministic setting [68] of sm-AGDA. For a discussion on related measurability challenges in the Lyapunov analysis of momentum-averaged methods, see [34], which obtained HP bounds in the SCSC setting.
**Questions**
**1-** In this paper, we considered stochastic smooth nonconvex-PL (NCPL) problems. For deterministic NCPL problems, AGDA and sm-AGDA have the complexity of $O(\kappa^2/\epsilon^2)$ and $O(\kappa/\epsilon^2)$, respectively, for finding a point $(x,y)$ satisfying $\\|\nabla f(x,y)\\|\leq \epsilon$, as shown in [68]. Here $\kappa = \ell/\mu$ is the condition number, where $\ell$ is the Lipschitz constant of the gradient and $\mu$ is the PL constant. For Catalyst-AGDA, [64] also shows the rate $O(\kappa/\epsilon^2)$ for deterministic NCPL problems. Regarding stochastic NCPL problems, the only existing result, which holds in expectation, is due to [64]; the authors show that stochastic AGDA and stochastic sm-AGDA have the complexity of $O(\kappa^4/\epsilon^4)$ and $O(\kappa^2/\epsilon^4)$, respectively, for computing a point $(x,y)$ which satisfies $\mathbb{E}\\|\nabla f(x,y)\\|\leq \epsilon$.
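For completeness, the PL condition on the dual variable assumed throughout can be stated as follows (our paraphrase of the standard definition, with PL constant $\mu$ as above):

```latex
% PL condition on the dual variable (standard form, our paraphrase):
\[
  \frac{1}{2}\,\bigl\|\nabla_y f(x,y)\bigr\|^2
  \;\ge\; \mu\,\Bigl(\max_{y'} f(x,y') - f(x,y)\Bigr)
  \qquad \text{for all } x,\ y,
\]
% which holds in particular whenever f(x, .) is mu-strongly concave, so the
% nonconvex-strongly-concave setting is a special case of NCPL.
```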
Our paper provides the first HP results for stochastic NCPL problems, showing that the stochastic sm-AGDA method can compute a point $(x,y)$ that satisfies $\\|\nabla f(x,y)\\| \leq \epsilon$ w.p. at least $1-q$ within $O\left(\ell\kappa^2\delta^2 \epsilon^{-4} + \kappa \epsilon^{-2} (\ell + \delta^2 \log(1/q))\right)$ stochastic gradient calls for any $q \in (0, 1)$.
**2-** In our paper, we consider the same algorithm (sm-AGDA) under the same assumptions (smooth NCPL problems) considered in [64]. Assuming that the variance of the stochastic gradient is bounded by a constant $\delta^2$, [64] shows a complexity of $O\left(\ell\kappa^2\delta^2 \epsilon^{-4} + \kappa \epsilon^{-2} \ell \right)$ stochastic gradient calls for computing $(x,y)$ that satisfies $\mathbb{E}\\|\nabla f(x,y)\\|\leq \epsilon$. In our work, we make an additional light-tail (subGaussian) assumption on the noise, and obtain a HP result showing that $O\left(\ell\kappa^2\delta^2\epsilon^{-4} + \kappa \epsilon^{-2} (\ell + \delta^2 \log(1/q))\right)$ stochastic gradient calls are sufficient to calculate a point $(x,y)$ satisfying $\\|\nabla f(x,y)\\| \leq \epsilon$ w.p. at least $1-q$. Such light-tail assumptions are commonly made for HP results even in convex minimax problems, as we discuss in the introduction.
To our knowledge, this is the first HP result for a nonconvex minimax problem, as we highlight in the introduction/Table 1. In particular, one should note that HP guarantees provide a much finer resolution than in-expectation ones, since gradients can be small on average (in expectation) while still being arbitrarily large with some positive probability. Finally, our HP bounds prove to be tight: we recover the in-expectation bound of [64] using the fact that the expectation of any random variable can be written as the integral of its quantile function from $p=0$ to $p=1$: $\mathbb{E}[U] = \int_{p=0}^1 Q_p(U)\, dp$.
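As a heuristic sketch of this recovery (our notation; constants and lower-order terms suppressed), the only $q$-dependent factor in our complexity is $\log(1/q)$, whose integral over all quantiles is a constant:

```latex
\mathbb{E}[U] \;=\; \int_{0}^{1} Q_p(U)\, dp,
\qquad
\int_{0}^{1} \log\frac{1}{q}\, dq
\;=\; \Big[\, q - q\log q \,\Big]_{0}^{1}
\;=\; 1.
```

Hence, integrating the per-quantile guarantee over $q$ turns the $\delta^2\log(1/q)$ term into an $O(\delta^2)$ contribution, matching the in-expectation complexity of [64] up to lower-order terms.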
**3-** Yes. The NCPL assumption holds for the first experiment directly; here the primal problem is indeed non-convex and the dual is PL (and not strongly concave). The second experiment is in the non-convex/strongly concave (NCSC) setting as the regularizer $g(y)$ is strongly convex. Since SC implies the PL property, the second experiment in the NCSC setting is a special case of the NCPL setting. As such, our assumption of NCPL objectives holds for both of our experiments.
We hope these responses satisfy the reviewer and result in an improved score.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I believe the authors addressed all 5 of my questions very well. I hope the authors will add those valuable discussions in the appendix. Based on the rebuttal, I update my score from 6 to 7. | Rebuttal 1:
Rebuttal: ### General response: Tightness of our high-probability bounds
We thank the referees for their time invested in reviewing our paper. Following the request from several referees, we discuss below the benefit of our approach against naive expectation-to-high-probability conversions using Markov's inequality.
Our bounds are significantly better than the simple high-probability bounds one can obtain using the existing results in terms of expectation; this point is explained in detail below. For the nonconvex-PL setting considered in our paper, the only expectation result we are aware of is by [64]. Assuming the variance of the stochastic gradient is bounded by $\delta^2$, for sm-AGDA the authors show a complexity of $\mathcal{G}(\epsilon):=O\left(\frac{\ell\kappa^2\delta^2}{\epsilon^4} + \frac{\kappa}{\epsilon^2}\ell \right)$ iterations/stochastic samples for computing an $\epsilon$-stationary solution in expectation, i.e., for computing $(x,y)$ that satisfies $\mathbb{E}\\|\nabla f(x,y)\\|\leq \epsilon$. Here, $\mu$ is the PL constant, $\ell$ is the gradient Lipschitz constant and $\kappa=\ell/\mu$. This result basically acts as an oracle that generates a sample $(\hat x,\hat y)$ satisfying $\mathbb{E}\\|\nabla f(\hat x,\hat y)\\| \leq \epsilon$ after $\mathcal{G}(\epsilon)$ iterations. We next present some naive approaches using the Markov inequality to obtain trivial high-probability bounds that hold w.p. at least $1-q$ for any given $q\in (0,1)$:
- The first approach involves applying the oracle to generate a solution $(\hat x,\hat y)$ such that $\mathbb{E}[\\|\nabla f(\hat{x}, \hat{y})\\|] \leq q \cdot \epsilon$. Then by naively using Markov's inequality, we get an $\epsilon$-stationary point w.p. at least $1-q$. The complexity bound for this approach is $\mathcal{G}(q\epsilon) = O\left(\frac{\ell\kappa^2\delta^2}{q^4\epsilon^4} + \frac{\kappa}{\epsilon^2q^2}\ell \right)$ and it scales badly with $q$.
- The second approach involves calling the oracle $m$ times to generate a group of $\epsilon/2$-stationary solutions $\\{(\hat x^{(i)}, \hat y^{(i)})\\}_{i=1}^m$. If the performance metric $\\| \nabla f (\hat x^{(i)}, \hat y^{(i)})\\|$ can be evaluated to $\epsilon$-accuracy for all $i\in\{1,\ldots,m\}$, i.e., for each $i$, we get a stochastic estimate $\tilde\nabla{f (\hat x^{(i)}, \hat y^{(i)})}$ such that $\\|\tilde\nabla{f (\hat x^{(i)}, \hat y^{(i)})}-\nabla{f (\hat x^{(i)}, \hat y^{(i)})}\\| \leq \epsilon$ with high probability, then the point $(\hat x^{(i^*)}, \hat y^{(i^*)})$ such that $i^* = \arg\min\\{\\|\tilde\nabla{f (\hat x^{(i)}, \hat y^{(i)})}\\| : i=1,\ldots, m\\}$ will satisfy the desired high-probability bound as long as $m = \Omega \left( \log \frac{1}{q} \right)$. This approach resolves the unfavorable dependence on $q$ present in the first approach. However, evaluating the stochastic estimate $\tilde\nabla{f (\hat x^{(i)}, \hat y^{(i)})}$ with the aforementioned properties for each $i$ typically requires $O(1/\epsilon^2)$ samples if light-tail assumptions are not made for the stochastic gradients [36, 59], [Thm 2.4,19] (using Markov inequality-type bounds or standard concentration inequalities such as [Lem 2.3,19]); on the other hand, the number of samples required for each $i$ is $O(\delta^2/\epsilon)$ if light-tail (subGaussian) assumptions are made for the stochastic gradients ignoring some logarithmic factors [Cor. 2.5,19]. In our paper, we consider general problems *without* assuming a finite-sum form for the objective and the data can be arriving in a streaming fashion. 
Therefore, the total complexity with the second approach when applied to the problems we consider becomes at least $\log\left(\frac{1}{q}\right) \mathcal{G}(\epsilon) + \log\left(\frac{1}{q}\right) O\left(\frac{\delta^2}{\epsilon}\right) = O\left(\log\left(\frac{1}{q}\right)\frac{\ell\kappa^2\delta^2}{\epsilon^4} + \log\left(\frac{1}{q}\right)\frac{\kappa\ell}{\epsilon^2} + \log\left(\frac{1}{q}\right)\frac{\delta^2}{\epsilon}\right)$.
- For unconstrained strongly convex-strongly concave (SCSC) problems, one can use the robust distance estimation technique together with $m = \Omega \left( \log \frac{1}{q} \right)$ parallel runs; however, this approach does not apply to the nonconvex minimax problems we consider in this paper [Sec 2,36].
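As a quick numeric sanity check of the comparison above (the values of $\ell$, $\kappa$, $\delta$, $\epsilon$, $q$ below are arbitrary illustrative choices, not values from the paper), one can evaluate the leading terms of the three complexity expressions:

```python
import math

# Illustrative problem constants (arbitrary choices, for comparison only).
ell, kappa, delta, eps, q = 1.0, 10.0, 1.0, 1e-2, 1e-2

# Naive Markov conversion: run the in-expectation oracle at accuracy q*eps.
markov = (ell * kappa**2 * delta**2 / (q**4 * eps**4)
          + kappa * ell / (q**2 * eps**2))

# Repeated runs + validation: log(1/q) multiplies every term, incl. 1/eps^4.
repeated = math.log(1 / q) * (ell * kappa**2 * delta**2 / eps**4
                              + kappa * ell / eps**2 + delta**2 / eps)

# Our bound: log(1/q) only enters the lower-order 1/eps^2 term.
ours = (ell * kappa**2 * delta**2 / eps**4
        + kappa / eps**2 * (ell + delta**2 * math.log(1 / q)))

print(f"Markov: {markov:.2e}, repeated runs: {repeated:.2e}, ours: {ours:.2e}")
```

With these values the Markov route is larger by many orders of magnitude, and the repeated-run route pays a $\log(1/q)$ factor on the dominant $1/\epsilon^4$ term.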
Our high-probability result shows a complexity of $O\left(\frac{\ell\kappa^2\delta^2}{\epsilon^4} + \frac{\kappa}{\epsilon^2}\Big(\ell + \delta^2 \log(\frac{1}{q})\Big)\right)$ under the light-tail (subGaussian) assumption. This result is significantly better than the approaches mentioned above; indeed, it has the best (logarithmic) scaling not only with respect to $q$ but also with respect to $\epsilon$. In the second approach, the logarithmic term $\log(1/q)$ multiplies the high-order $O(1/\epsilon^4)$ term, whereas in our approach it only affects the second-order $O(1/\epsilon^2)$ term. More importantly, the approaches that require $\log(1/q)$ parallel runs can be impractical for the streaming setting we consider, where the data arrives one point at a time; this is mainly because the parallel runs would have to wait for the arrival of $\Omega(\log(1/q))$ new data points to be able to take one step. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors derive a high probability bound for convergence of a stochastic gradient descent-ascent method over a nonconvex (PL) class of functions. Specifically, they analyze the sm-AGDA algorithm, which previously only had a bound in expectation, and show a similarly tight high probability bound when gradient noise is assumed subgaussian. The authors provide two experimental settings demonstrating that the distribution of these iterates over multiple runs converges as expected for the algorithm.
Strengths: * Technically solid paper that thoroughly analyzes the sm-AGDA algorithm and interprets the results clearly (e.g., Remark 11). I don't think the assumptions are too strict, or at least not outside the norm for this type of analysis.
* Good comparison to other related works, especially Table 1 is useful to gauge the current state of theoretical results for this problem.
* Helps to move analysis closer to realistic nonconvex min-max optimization settings.
Weaknesses: * While I understand that the bound in Thm 10 over the average gradient norms motivates the random sampling of iterates in the sm-AGDA algorithm, it still seems like practically the best thing to do in Fig. 1 is to take the last iterate (or at least sample over some last window of iterates). Is there a straightforward way to come up with a similar high probability bound on the last iterate?
* I'd maybe like to see more discussion about how the analysis in this paper differs from that in the original sm-AGDA work [63]. At least for some previous work I've seen with high probability bounds for convex optimization, oftentimes the actual modification required to go from expectation to high probability is fairly minor (i.e. just boils down to bounding the summation of Subgaussian noise terms).
* I like the first set of experiments but a bit confused at the purpose of the second. I don't quite understand comparing concentration properties of other methods if the analysis only applies to one algorithm. Maybe if the authors highlighted what unique components of the sm-AGDA algorithm might give it better quantile convergence properties than others this would make more sense.
If the authors can address some of these concerns I am willing to raise my score to a more solid accept
Other Minor Notes:
* DRO abbreviation in line 238 not defined initially
* [63] and [64] appear to be referencing the same paper
* Line 306 and 311 appear to be referencing the wrong figure
Technical Quality: 3
Clarity: 3
Questions for Authors: * As mentioned in remark 11 this bound appears to be pretty tight compared to the bound in expectation. Do you think if this type of analysis was applied to other existing algorithms you would get similarly tight bounds?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors clearly state their assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the feedback. Below we provide a point-by-point response to each of the weaknesses/questions raised, in order.
**Weaknesses**
**1-** The only known last-iterate results for nonmonotone VI problems require more restrictive conditions like local *quadratic growth* around critical points or 2nd-order sufficient conditions [28, 74]. Some existing worst-case analysis in convex-concave cases shows that complexity with averaging can be strictly better than the last iterate [76]. In conclusion, establishing a last-iterate convergence result in our nonconvex setting appears to be difficult and unlikely to offer better guarantees than our averaging approach.
**2-** We concur with the reviewer that noise from subGaussian gradient estimates is likely to produce a light-tailed distribution for the final iterate (or its average), but the main challenge remains achieving a precise balance within the bias-quantile trade-off of our complexity. Our bounds are tight: when $\delta=0$, we recover the deterministic $O(\kappa/\epsilon^2)$ bound for sm-AGDA, and the quantile part (i.e., terms involving $\delta$) of our bounds scales favorably with $\epsilon$ and $q$. Turning expectation estimates into high-probability (HP) estimates via Markov's inequality would result in much worse estimates than ours; see our **General response: Tightness of our high-probability bounds**.
Our bounds achieve good complexity by striking the right balance between their bias and quantile terms. This is done by defining stochastic processes ($\tilde{A}_t, \tilde{B}_t, \tilde{C}_t$, and $\tilde{D}_t$ in Cor. 8) that show the desired concentration properties (Thm. 9). Constructing these processes (e.g., $\tilde{C}_t$) is non-trivial, requiring careful measurability considerations and deriving various inequalities (exploiting smoothness and PL properties) to extend the existing analysis for deriving expectation guarantees [63] and for the deterministic setting [68] of sm-AGDA. For a discussion on measurability challenges in the HP analysis of momentum-averaged algorithms, see [34] which obtained HP bounds in the SCSC setting. Similarly, sm-AGDA employs momentum averaging through its parameter $\beta$ in the nonconvex setting and faces similar measurability issues. These challenges are specific to HP analyses and do not occur in the less sensitive expectation bounds from [63], where noise can be mitigated using the properties of conditional expectation.
Obtaining HP guarantees in the nonconvex min-max problems is significantly harder than for nonconvex unconstrained minimization due to the necessary time-scale separation between the primal and dual, where primal updates leading to a descent in the primal function can amplify errors in the dual domain if parameters and descent properties are not carefully designed/analyzed [75,6,39].
**3-** In our experiments, both SAPD+ and SMDAVR needed smaller primal and dual step sizes and more tuning of them. Generally, for GDA methods, $\tau_1$ and $\tau_2$ must be chosen such that $\tau_1/\tau_2=\Omega(\kappa)$. For stoc. GDA [39] and stoc. AGDA [6], $\tau_1/\tau_2=\Theta(\kappa^2)$, but sm-AGDA does not require this thanks to its primal regularization which further allows working with larger primal and dual step sizes of the same order. For example, in sm-AGDA, one can use a primal step size $\tau_1 = O\left(\min\left(\frac{1}{\ell}, \frac{1}{\sqrt{T}}\right)\right)$ and a dual step size $\tau_2=\Theta(\tau_1)$. These factors contribute to sm-AGDA's practical success and to why it can admit better quantile convergence properties.
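To make the role of these step sizes concrete, here is a minimal runnable sketch of the smoothed alternating GDA template on a toy quadratic saddle problem. This is an illustration only: the objective, the step sizes $\tau_1=\tau_2=0.1$, the smoothing weight $p$, and the averaging weight $\beta$ are arbitrary toy choices; the exact sm-AGDA updates and parameter conditions are those given in [68], not this sketch.

```python
# Toy objective f(x, y) = 0.5*x^2 + x*y - 0.5*y^2 (illustrative, deterministic).
def grad_x(x, y): return x + y
def grad_y(x, y): return x - y

tau1, tau2, p, beta = 0.1, 0.1, 1.0, 0.5   # toy parameters
x, y, z = 1.0, 1.0, 1.0                    # z is the smoothing/anchor variable

for _ in range(500):
    # Primal descent on the smoothed objective f(x, y) + (p/2) * (x - z)^2.
    x = x - tau1 * (grad_x(x, y) + p * (x - z))
    # Alternating dual ascent at the fresh primal iterate.
    y = y + tau2 * grad_y(x, y)
    # Momentum averaging of the anchor, controlled by beta.
    z = z + beta * (x - z)

print(x, y)  # both approach the stationary point (0, 0)
```

On this toy problem the iterates contract to the saddle point, and the anchor $z$ tracks $x$ at a rate set by $\beta$, which is the momentum-averaging behavior discussed above.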
The reviewer is correct that our quantile bound analysis only applies to sm-AGDA. However, we compare sm-AGDA's concentration properties with those of SAPD+ and SMDAVR to provide a baseline for the sm-AGDA results. SAPD+ and SMDAVR have state-of-the-art complexity bounds in expectation (SAPD+ [70] has better condition number dependency than sm-AGDA [63]), yet their quantile bounds have not been studied. We thus wanted to see sm-AGDA's practical performance against these methods.
sm-AGDA is a state-of-the-art method among single-loop algorithms for nonconvex-PL minimax problems under both deterministic [68] and in-expectation [63] performance metrics. For example, sm-AGDA offers better in-expectation guarantees than stochastic (alternating) GDA in terms of the complexity bound's dependence on the condition number $\kappa$. Single-loop methods like sm-AGDA are favored for their simplicity when compared to multi-loop algorithms, e.g., [70]; hence, they are easy to tune. These factors contribute to why sm-AGDA may achieve better quantile properties than other methods like stoc. GDA or multi-loop algorithms.
**4-** We addressed the minor notes, thanks for the good catch.
**Questions**
**1-** Indeed, our approach can adapt to other algorithms for nonconvex-PL problems once one defines new stochastic processes $\tilde A_t, \tilde B_t, \tilde C_t, \tilde D_t$ (as in Cor. 8) and adjusts the potential function. For example, our analysis applies to stoc. AGDA, but results in worse HP bounds compared to sm-AGDA in terms of their dependence on the condition number $\kappa$, i.e., the $\kappa$ factor in the sm-AGDA complexity becomes $\kappa^2$ for stoc. AGDA.
We hope these responses satisfy the reviewer and lead to an improved score toward a solid accept.
## References:
[74] W. Azizian, F. Iutzeler, J. Malick, and P. Mertikopoulos. The rate of convergence of bregman proximal methods: Local geometry versus regularity versus sharpness. SIAM Journal on Optimization, 34(3):2440–2471, 2024.
[75] H. Li, F. Farnia, S. Das, and A. Jadbabaie. On convergence of gradient descent ascent: A tight local analysis. ICML, pages 12717–12740. 2022.
[76] N. Golowich, S. Pattathil, C. Daskalakis, and A. Ozdaglar. Last iterate is slower than averaged iterate in smooth convex-concave saddle point problems. COLT, pp. 1758–1784. 2020. | null | null | null | null | null | null |
Sample Complexity of Interventional Causal Representation Learning | Accept (poster) | Summary: Interventional causal representation learning aims to recover the causal graph for latent variables and simultaneously recover the latent variables. While identifiability has been established for the infinite sample regime, this is not practical. This paper aims to provide PAC-style bounds on identifiability (of both the graph and latent variables) in the finite-sample regime, assuming stochastic soft interventions and linear transformations from the latent to the observation space. Three prototypical algorithms (causal order estimation, graph estimation, and inverse transformation estimation) are proposed to estimate the graph and the latent variables, which pave the foundation, from which PAC-style bounds have been established. Numerical assessments are conducted to provide complementary insights into the sample complexity results.
Strengths: Identifiability analysis is often conducted in the infinite sample regime, limiting its practical applicability. This paper extends identifiability analysis to the finite sample regime, representing a significant step forward.
Weaknesses: The bounds are rather loose, which limits its practical applicability. Apart from that, these bounds do not provide sufficient guidance on designing better algorithms recovering latent variables and graphs. Having said so, I still welcome this type of theoretical papers.
Technical Quality: 3
Clarity: 3
Questions for Authors: I wonder if causal order estimation is necessary. Since the latent variables have yet to be estimated, their values and semantics are flexible. This implies that any given causal order could be acceptable, as these variables can be learned to fit the specified orders. Some edges would be pruned later, which can be done in the graph estimation stage.
Additionally, I wonder if there are other quantities beyond score differences and the correlation of score differences that could be used to recover the graph and the latent variables.
Furthermore, it would be beneficial if the analysis could assess which variables and which parts of the graph are more prone to error, and to what degree, providing feedback on the estimated/recovered variables and graphs.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n.a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the kind review and thoughtful questions. We address the raised questions as follows.
**Causal order estimation.** The reviewer is correct in that since the causal graph and variables are latent, we can choose any ordering of the nodes and latent variables first and then assign meanings later. In fact, in Line 98, without loss of generality, we say that $(1,\dots,n)$ is a causal order. However, after fixing this assignment, *we do not know* which node intervened in which environment (e.g., we do not know the causal order between the nodes intervened in $\mathcal{E}^i$ and $\mathcal{E}^j$). Hence, in our algorithms, we specifically identify each interventional environment with the corresponding intervened latent variable. For example, in Algorithm 3, we use the $m$-th interventional environment where node $k$ is intervened to recover the intervened variable $Z_k$ (up to some mixing) and assign it specifically to the $m$-th component of our latent variables estimate $\hat{Z}$.
**Approaches to CRL and score-based framework.**
- There exist different approaches to interventional CRL that don’t use score differences. One such example is [9], in which the authors present a sparsity-promoting VAE-based approach.
- Our motivation for investigating the score difference-based approach stems from the fact that it provides the strongest existing results for multiple settings. For the most general setting of nonparametric transforms and causal models, the only known provably correct algorithm is score-based [7]. For linear transformations, [4] provides the most general results (without restricting the causal models).
- On an interesting note, the approach of [6] for Gaussian latent models and linear transformations is based on precision matrix differences, which exactly correspond to score function differences between Gaussian distributions.
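This correspondence follows from a standard computation (a sketch in our notation; $\Theta_i$ and $\mu_i$ denote the precision matrix and mean of the Gaussian in environment $i$): the score of a Gaussian is affine in $z$, so score differences are governed by precision matrix differences:

```latex
z \sim \mathcal{N}(\mu, \Theta^{-1})
\;\Longrightarrow\;
\nabla_z \log p(z) = -\Theta\,(z-\mu),
\qquad
\nabla_z \log p_1(z) - \nabla_z \log p_2(z)
 = -(\Theta_1 - \Theta_2)\, z + (\Theta_1 \mu_1 - \Theta_2 \mu_2).
```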
**Point-error analysis.** In our algorithms, we use thresholded decisions to obtain our graph estimate. Therefore, as a detection problem, the decisions that are most likely to fail are those with values that, in the noise-free setting, are closest to the decision boundary, i.e., the thresholds. In our algorithms, it is possible to keep track of the estimated values of these statistics as surrogates of the noise-free values and therefore assess how error-prone each decision is. That being said, the analysis in our paper is sequential, and errors can propagate. We consider the sample complexity analysis of approximate latent graph recovery, up to $k$-point errors, a critical future direction.
---
Rebuttal Comment 1.1:
Comment: I'm satisfied with the response and keep the score as 'accept.' | Summary: This paper establishes finite sample guarantees for recovering the latent causal graph under stochastic soft interventions and observations under a unknown linear transformation. Experiments are conducted and code is given.
Strengths: The paper is well motivated and the contributions are clear.
Weaknesses: I did not check the proofs in detail but I do not spot any glaring weaknesses.
The experiments are rather small scale (up to 10 nodes).
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is $P_I$ in equation (12)?
- In practice, do we need to know $\eta^*$ or $\gamma^*$ to invoke your results? How do we know enough samples have been collected?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Nil
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the kind review and thoughtful questions.
- ${\bf P}_I$ is the permutation matrix for permutation $I$, the intervention order. We will include the definition in the notations paragraph.
- The current theorem statements are given for specific choices of thresholds that require the knowledge of unknown model parameters. However, this is **not necessary** for our analysis – it suffices to choose $\eta\in(0,\eta^*)$ and $\gamma\in(0,\gamma^*)$, which results in similar sample complexity bounds that depend on the thresholds $\eta$ and $\gamma$ directly. Since this flexibility is critical for the practicality of our results, we will modify our theorem statements to make this clear. We emphasize that the changes to the proofs and the results are minuscule: it suffices to replace all occurrences of $\eta^*/2$ and $\gamma^*/2$ in our sample complexity statements with $\min\\\{\eta,\eta^*-\eta\\\}$ and $\min\\\{\gamma,\gamma^*-\gamma\\\}$, respectively. For details, please see our response to reviewer jaoy on “Setting thresholds”.
- For implementation purposes, this means that we can select $\eta$ and $\gamma$ arbitrarily small. We highlight that requiring hyperparameters (i.e., $\eta$ and $\gamma$) to be restricted in an interval determined by unknown model parameters (i.e., $\eta^*$ and $\gamma^*$ in our algorithm) is standard (and generally inevitable) in the analysis of regularized algorithms in high dimensional statistics for even simpler problems like sparse support recovery. A common example is the error bound for regularized lasso problem in [HTW, Theorem 11.1], which requires the regularization parameter $\lambda_N$ to be greater than $2\\|{\bf X}^\top{\bf w}\\|_{\infty}/N$ – which depends on the _noise realizations_ ${\bf w}$.
[HTW] T.Hastie, R.Tibshirani, and M.Wainwright. Statistical learning with sparsity. _Monographs on statistics and applied probability_, 143, 2015.
- *Number of latent nodes*: We kindly note that the existing interventional CRL literature generally works with small graphs: $n=10$ is the largest graph size considered in the closely related single-node interventional CRL literature (e.g., for linear transformations [6] considers 5 nodes, [4] considers 8 nodes, and [11] considers 10 nodes. For linear causal models under nonlinear mixing Buchholz et al. (2023) consider 5 nodes. For fully nonparametric CRL, [13] considers 2 nodes, and [7] considers 8 nodes). Therefore, our experiments with up to 10 nodes are on par with the state-of-the-art interventional CRL.
S. Buchholz, G. Rajendran, E. Rosenfeld, B. Aragam, B. Schölkopf, and P. Ravikumar. Learning linear causal representations from interventions under general nonlinear mixing. NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses!
I understand from your response that one can select $\eta$ and $\gamma$ to be arbitrarily small for implementation. However, from my understanding of your theoretical claims, this should blow up the required number of samples in order to conclude "something useful", right? Can you give concrete suggestions on how practitioners can use your algorithm while drawing meaningful conclusions?
---
Reply to Comment 1.1.1:
Comment: Thanks for the thoughtful question. In practice, we can include routines for estimating $\eta^*$ and $\gamma^*$. These estimates do not have to be highly accurate and even rough estimates suffice to choose reliable thresholds $\eta$ and $\gamma$. Specifically, based on estimates for $\eta^*$ and $\gamma^*$ we can choose “safe” thresholds, e.g., one-fourth of the estimates, so that (i) with high probability we satisfy the requirement $\eta < \eta^*$, and (ii) we can avoid collecting excessive samples and compromising the sample complexity bounds. For practical purposes, we can construct such estimates for both $\eta^*$ and $\gamma^*$ as follows.
- **Estimate for $\eta^{*}$:** We note that in Algorithm 1, at step $t$, all matrices investigated must have either rank $t$ or $t-1$. Therefore, the $(t-1)$-th eigenvalues of all of these matrices are non-zero, and hence at least $\eta^*$. This observation lets us iteratively build and refine an upper bound on $\eta^*$ during Algorithm 1, which we can use as a surrogate for $\eta^*$ in the subsequent Algorithms 2 and 3. For Algorithm 1 itself, at step $t$, we can pick $k\in{\cal V}_t$ with minimal $t$-th eigenvalue; this is a threshold-free way of estimating the causal order, but might not be amenable to tractable analysis. To keep the sample complexity analysis tractable, we can instead use the current statement of Algorithm 1 with thresholded rank tests.
- **Estimate for $\gamma^{*}$:** We first note that $\gamma^*$ definition in eq.(23) depends entirely on the ground truth decoder matrix $\bf G$ and its pseudoinverse. Secondly, we note that Algorithm 3 can be used independently of Algorithm 2 to generate an estimate $\bf H$ of ${\bf G}^\dagger$ up to some estimation error and possible mixing among rows. We can compute the value $\min\_{i\in[n]}\\|{\bf H}\_{i,:}\\|_2\cdot\\|{\bf H}^\dagger\_{:,i}\\|\_2$, which would be equal to $\gamma^*$ had ${\bf H} = {\bf G}^\dagger$, and use it as our estimate of $\gamma^*$. | Summary: This paper provides finite-sample identifiability results for recovering the latent causal graph and the generating latent variables given observations in a high-dimensional space that have been generated by the latent variables by a linear transformation and single-node soft interventional data with interventions on all nodes. Previous work provides identifiability results only in the infinite-sample regime. The algorithms rely on score-based estimators which have been studied in literature. Specifically the correlation matrix of the score differences are shown to be constrained by the graph and the linear transformation matrix. To recover the graph, first a topological order on the variables is estimated. Verifying whether a particular edge exists is checked by a necessary and sufficient condition on the approximate orthogonality of the column space and null space. For the variables recovery, it is sufficient to consider a unit vector in the column space. An existing RKHS-based score estimator is used to obtain the final sample-complexity results.
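A small numerical sketch of this $\gamma^*$ surrogate (the dimensions are hypothetical, and we use a noise-free $\bf H$ for illustration; in practice $\bf H$ would come from Algorithm 3):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((6, 3))   # hypothetical decoder: 6-dim observations, 3 latents
H = np.linalg.pinv(G)             # noise-free stand-in for Algorithm 3's estimate of G^+

H_pinv = np.linalg.pinv(H)        # equals G in this noise-free case
gamma_est = min(np.linalg.norm(H[i, :]) * np.linalg.norm(H_pinv[:, i])
                for i in range(H.shape[0]))
print(gamma_est)
```

Since $({\bf H}{\bf H}^\dagger)_{ii}=1$ when $\bf H$ has full row rank, Cauchy-Schwarz gives $\gamma_{\text{est}}\geq 1$.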
Strengths: Causal representation learning is an active field of study and theoretical work that provides finite-sample identifiability results is quite relevant in practice and of interest to the community. Overall the paper is well-written with a clear flow of ideas that makes it easy to follow.
I liked that the proposed method has a decoupling between the score estimators and the rest of the algorithms with proofs for the infinite-sample regime separate. This helps us to understand that the difference in quality of finite-sample data versus infinite sample data is the score-estimators.
Weaknesses: A few weaknesses (with the major one being point 1) follow:
1. A major confusion for me was that the setting of the thresholds was not clear. If the true scores are not known, then how is \eta set from \eta* which depends on the unknown true score estimates. Similarly for \gamma that depends on \gamma^* that depends on the true transformation matrix. What am I missing?
2. Overall, the paper has multiple instances of notation not being defined, which makes the paper difficult to parse. For example, Definition 2 has P_I and the diagonal entries of C_{pa} undefined, and Lemma 2 has the pseudoinverse subscript undefined.
3. I also think that the experimental section needs basic details. One suggestion would be to cut out the final RKHS sample complexity bounds that are just plugging (29) into Theorem 1 and 2.
4. The proofs in the paper first derive results for the infinite-sample regime where score estimates are error-free. Finite-sample guarantees are then obtained by bounding the error in the score estimates. It would make the paper stronger if the infinite-sample regime results are compared with existing work.
5. In Figure 1b, why are there spikes in the recovery rate even for worse score estimate MSE? Does this go against the main message of the paper that worse score estimates should imply poorer graph recovery and latent variable estimation?
6. What is the reason behind defining the variables recovery as it is defined in Definition 2. Is there a previous instance of such a definition because this doesn't seem standard. Any rationale would help the reader.
Overall, I think the paper is a solid contribution if these questions are clarified. But my current score can't improve given these questions.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weakness section for major issues. A few miscellanous points:
1) I haven't checked the details but it struck me as weird to notice that there is a specific value of \gamma in (20) that verifies whether an edge exists or not. Is there any intuition for this?
2) Typos - (22) M \subseteq [n], Line 179.
3) In Figure 1b, is there a reason why even for better score estimates, there is no point for the 10 nodes case?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Implicitly addressed in the conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a thorough review and thoughtful questions. We hope our answers address the main concerns of the reviewer.
**Setting the thresholds.** We are grateful to the reviewer for bringing this up. We recognize that our choices of thresholds could have been presented differently and in a significantly more general way. As the reviewer correctly points out, the current theorem statements are given for a specific choice of thresholds that requires knowledge of unknown model parameters. However, this is **not necessary** for our analysis – it suffices to choose $\eta\in (0,\eta^*)$ and $\gamma\in(0,\gamma^*)$, which results in similar sample complexity bounds that depend on the thresholds $\eta$ and $\gamma$ directly. We will modify our theorem statements to make this clear. Let us explain why this choice suffices.
- We prove our sample complexity results by deriving sufficient upper bounds on the score estimation noise for our thresholded approximate tests to succeed. We use two tests: a) An approximate rank test with soft threshold $\eta$ that uses Weyl’s inequality for the analysis and b) An approximate subspace orthogonality test with soft threshold $\gamma$ that uses Davis Kahan (symmetric) sin $\Theta$ theorem for analysis.
- In both cases, the approximate test uses a metric $\chi$ to distinguish between two models, one where the noise-free metric value is $\chi = 0$ vs the one where it is at least $\chi\geq\eta^*$ (or $\gamma^*$). In order to guarantee the correct output using a threshold $\eta$ (or $\gamma$), we must ensure that maximal error $\Delta$ for the metric $\chi$ satisfies $\Delta\leq\min\\{\eta,\eta^*-\eta\\}$ (similarly for $\gamma$). Sample complexity results directly follow from these upper bounds on $\Delta$.
- For implementation purposes, this means that we can select $\eta$ and $\gamma$ arbitrarily small. We highlight that requiring hyperparameters (i.e., $\eta$ and $\gamma$) to be restricted in an interval determined by unknown model parameters (i.e., $\eta^*$ and $\gamma^*$ in our algorithm) is standard (and generally inevitable) in the analysis of regularized algorithms in high dimensional statistics for even simpler problems like sparse support recovery. A common example is the error bound for regularized lasso problem in [HTW, Theorem 11.1], which requires the regularization parameter $\lambda_N$ to be greater than $2\\|{\bf X}^\top{\bf w}\\|_{\infty}/N$ – which depends on the _noise realizations_ $\bf w$.
[HTW] T. Hastie, R. Tibshirani, and M. Wainwright. Statistical learning with sparsity. _Monographs on Statistics and Applied Probability_, 143, 2015.
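For concreteness, the thresholded rank test can be sketched numerically as follows (an illustrative toy only – `approximate_rank`, the matrix sizes, and the noise scaling are our own choices, not from the paper):

```python
import numpy as np

def approximate_rank(M_hat, eta):
    """Approximate rank test: count singular values of the noisy estimate
    M_hat that exceed the soft threshold eta. By Weyl's inequality, the
    test returns the true rank whenever the spectral-norm noise Delta
    satisfies Delta <= min(eta, eta_star - eta), where eta_star is the
    smallest nonzero singular value of the clean matrix."""
    s = np.linalg.svd(M_hat, compute_uv=False)
    return int(np.sum(s > eta))

# Clean rank-2 matrix with singular values (3.0, 1.0), so eta_star = 1.0.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(6, 6)))
V, _ = np.linalg.qr(rng.normal(size=(6, 6)))
M = U[:, :2] @ np.diag([3.0, 1.0]) @ V[:, :2].T

eta = 0.4                                    # any eta in (0, eta_star) works
noise = rng.normal(size=(6, 6))
noise *= min(eta, 1.0 - eta) / (2 * np.linalg.norm(noise, 2))  # Delta = 0.2
assert approximate_rank(M + noise, eta) == 2
```

The approximate subspace orthogonality test with threshold $\gamma$ follows the same pattern, with the Davis–Kahan sin $\Theta$ theorem in place of Weyl's inequality.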
**Experiment details.** The experimental details are currently provided in Appendix G. We will add those details to the main paper by using the additional page that we can have in the final version.
We also note that Theorems 3 and 4 are essential in the main body. It is correct that the main results are Theorems 1 and 2, yet in them the sample complexity of a score estimator remains implicit. We believe that having Theorems 3 and 4 stated explicitly is essential for accessibility.
**Experiment results.** Each data point in Figure 1b corresponds to a specific $(N,n,d)$ tuple (listed in Appendix G). Importantly, for each $(N, n)$, the current plot includes two $d$ values: $n$ and 15. On the x axis of Figure 1b, we erroneously plotted the MSE divided by $d$, which resulted in a plot that is the union of two shifted monotonic graphs for each $n$. We have corrected this error in Figure 1 of the PDF attached to the global response and presented the results in two figures accordingly. The new plot shows the desired monotonic behavior without major performance spikes.
In the same new plot, we also added a data point for a larger number of samples to show that a good graph recovery rate is possible for latent dimension $n=10$.
**Infinite sample results.** Thanks for the suggestion. We will add the following theorem (and its latent variables recovery counterpart), along with the attendant discussion, to the main paper.
_**Theorem (Infinite samples – Graph).**
In the infinite sample regime, under Assumption 1, for any $\eta\in(0,\eta^*)$ and $\gamma\in(0,\gamma^*)$, the collective output $\hat{\mathcal{G}}$ of Algorithms 1 and 2 is isomorphic to the transitive closure of the true latent graph $\mathcal{G}$._
**Latent variable recovery definition.** In the interventional CRL literature under linear transforms, two types of results exist. First, the so-called “perfect recovery” in which the latent variables are recovered up to permutations and scaling, which is defined exactly in the manner of our Definition 2 with ${\bf C}\_{\rm err} = 0$ and ${\bf C}\_{\rm pa}$ being a diagonal matrix, e.g., Definition 4 in [4]; Theorem 5.3 in [5]; Definition 1 in [9]. Secondly, the recovered latent variables are linear functions of some other variables as well. This recovery class is natural since Theorem 6 of [11] shows that element-wise latent variable identifiability is _impossible_ under soft interventions. Then, the various identifiability classes are defined similarly to Definition 2, e.g., Definition 5 in [4]. In these definitions, off-diagonal support of the recovery matrices (e.g., ${\bf C}\_{\rm pa}$) denotes the mixing with other random variables that vary under different results, e.g., we consider “mixing up to parents” whereas [3] and [4] consider up to “mixing with ancestors”.
**Notations and typos.** We thank the reviewer for pointing out several issues regarding notations and typos. ${\bf P}_I$ is the permutation matrix corresponding to $I$, the intervention order. We will carefully fix the other under-explained notations and correct the typos in the revised paper.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: The response to the reviewer below addresses my concern about the setting of thresholds. It seems that there is still a need to estimate the true $\eta^*$ and $\gamma^*$ in practice. I am satisfied with the other responses and have increased my score accordingly.
Strengths: 1. To the best of my knowledge, this is the first finite sample analysis for interventional CRL, which is a very interesting problem with high practical value.
2. All theoretical results including definitions are organized clearly.
Weaknesses: 1. The considered CRL setting is a little bit restrictive compared to the recent ones. Specifically, the assumption of linear mixing function may restrict its usage in a lot of real-world scenarios. Meanwhile, the indeterminacy is non-trivial, and further process/disentanglement might still be needed. However, it is understandable since sample complexity might be much more difficult to get for more general models.
2. Perhaps it would be better if there was more discussion on the assumptions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How to make sure that all the imperfections in the experiments are due to score estimates?
2. Are there any intuitive ideas to extend the range of the finite sample analysis to more general settings in CRL?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our results interesting and the thoughtful questions. We address the questions as follows.
- **Source of error in experiments.**
To assess the source of errors in experiments, we have the following analysis: In Appendix B, we establish the identifiability guarantees under the infinite-sample (i.e., perfect scores) regime. To verify this, we perform experiments using ground truth score functions. In these experiments, the graph estimation results were perfectly accurate in 100 runs, which implies that the graph errors we report in the paper are due to score estimation. We will add these perfect score results to the paper. To replicate these experiments with ground truth score functions, one can set the `estimate_score_fn` flag in `finite_sample_test.py` file line 57 to `False` and run the file.
- **Intuitions for finite sample analysis for general CRL.** In this paper, we have shown that some critical decision problems relevant to CRL – such as edge detection – can be done
via soft decisions in a way that is amenable to error analysis. While the specific tools used (e.g., subspace estimation) were limited to CRL under linear transformations, we believe that the overarching approach of reducing CRL to soft decisions and then obtaining finite sample guarantees of each such step is a good starting point for designing and analyzing more general CRL algorithms.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will maintain my positive rating. | Rebuttal 1:
Rebuttal: In the attached PDF, we provide a figure to update Figure 1b of the submitted paper with the updated experiment results. Specifically, there are two changes:
1) Each data point in old Figure 1b corresponds to a specific tuple $(N,n,d)$ (listed in Appendix G). For each $(N,n)$, the old plot has two points of $d$: $d=n$ and $d=15$. In the new figure, we present the two cases of $d$ in two figures for a clear presentation. The new plots show a monotonic behavior without major spikes.
2) In the new plots, we also added a data point for a larger number of samples to show that a good graph recovery rate is possible for latent dimension $n=10$.
Pdf: /pdf/787f74708a4542172f37f2a620bceccb21fb1d1c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SPO: Sequential Monte Carlo Policy Optimisation | Accept (poster) | Summary: This paper maximizes the expected reward with respect to the policy by rewriting it as a log marginal likelihood in a probabilistic model whose latents comprise states, actions, and “optimality” observations.
The prior over states is the world model and the prior over actions is the policy which is being optimized. The likelihood is the exponentiated reward, which can be normalized to make it a distribution over auxiliary “optimality” variables.
The marginal likelihood can be optimized via EM where in the E step, we maximize the evidence lower bound (ELBO) in (3) with respect to the variational distribution q and in the M step, we fix q and maximize the objective with respect to the policy π.
In the E-step (Sec. 4.2), the paper derives an analytic expression for q via the Lagrange multipliers method (eq. 6).
In the M-step (Sec. 4.3), the paper uses this q to maximize the objective with respect to π. The key difficulty is that the q derived in the E-step is only known up to a normalizing constant and so the paper proposes to get approximate samples from q using SMC (Sec. 4.2.1).
The experiments compare the proposed method to baselines on several tasks with discrete and continuous action spaces, and show wins. There are ablations showing the tradeoff between using more particles vs unrolling for longer.
In addition to performance, the paper argues that SPO is preferable to the popular MCTS method because it parallelizes more easily and performs well across more domains.
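For reference, the bound being maximized takes the standard control-as-inference form (a generic sketch for orientation; the paper's eq. (3) may use different notation, and the optimality-variable details here are the usual textbook assumptions):

```latex
% Generic control-as-inference ELBO (sketch, not the paper's exact eq. (3))
\log p(\mathcal{O}_{1:T})
  \;\ge\; \mathbb{E}_{q(\tau)}\!\left[\textstyle\sum_{t} r(s_t, a_t)\right]
  \;-\; \mathrm{KL}\!\left(q(\tau)\,\|\, p_{\pi}(\tau)\right),
\qquad
p(\mathcal{O}_t = 1 \mid s_t, a_t) \;\propto\; \exp\!\left(r(s_t, a_t)\right),
```

where $\tau=(s_1,a_1,\dots)$ and $p_\pi(\tau)$ is the trajectory distribution induced by the dynamics prior and policy $\pi$; the E-step optimises $q$ with $\pi$ fixed, and the M-step optimises $\pi$ with $q$ fixed.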
Strengths: Combining SMC and RL is interesting and the experiments show strong empirical gains. However, theoretically, I am not fully satisfied with the method itself—or at the very least, I found it difficult to understand the method fully from the description. See questions / comments below.
Therefore, I’m giving this paper a reject in its current form but I am willing to raise the score if the method or its description is improved.
Weaknesses: -
Technical Quality: 2
Clarity: 3
Questions for Authors: Major
- SMC (Sec. 4.2.1): From (6) it seems like there is just one distribution we want to sample from instead of a sequence of them. What’s the sequence of distributions that SMC targets? In other words, in the background section about SMC, it’s unclear what is $p(x)$ and what is $p(x_t | x_{1:t - 1})$; how do terms in (8) relate to this $p$?
- If I reverse-engineer a sequence of target distributions from (8), it should be $p(a_{1:t}, s_{1:t}) = \prod_{j = 1}^t \mathcal T(s_j | …) \pi(a_j | …) \exp(A^{\bar \pi}(a_j, s_j) / \eta^*)$. That is, the final target distribution is the transition times the policy times the exponent of the *sum* of advantage functions. I think this is incorrect since it doesn’t correspond to the sum of per-step rewards. Something that would make more sense to me is to have the incremental weights contain the exponentiated rewards for all time steps instead of the last one, where it’s the exponentiated advantage function. This is obviously a completely different algorithm.
- I don’t understand why (5) follows from (4). We have an unconstrained optimization of a regularized objective in (4) which is turned into a constrained optimization problem in (5). I also don’t understand how $\alpha$ turns into $\epsilon$.
- Why does (10) rewrite (9) as a constrained objective where it seems like in practice, we’re actually just optimizing (9).
- How is the value function and the Q-function learned? These seem critical for the method but I couldn’t find a description.
- E-step: I agree that it is difficult to optimize with respect to $q$ since it’s the distribution of the expectation and the term inside the expectation, $Q^q$ depends on it (eq. 4). The paper chooses to fix $Q^q$. I would have expected that it is the $q$ in $\mathbb E_q$ that makes optimization difficult since typically we’d have to resort to the reparameterization trick or REINFORCE. If we fix the $q$ in $\mathbb E_q$, optimizing the expectation is simple: just sample from $q$ and directly optimize $Q^q$.
Minor
- $\gamma$ in (3) is not introduced before.
- Should there be a likelihood term in the incremental weight in (2)? Or is $p(x_t | x_{1:t - 1})$ an unnormalized distribution? $p$ is not defined in this paragraph.
- Swapping the prior term $\log p(\theta)$ in (9) to the KL term in (10) is a little odd. Would it simply not work otherwise?
- In App. G.1, why is $\mu_q$ dependent on $q$?
Other relevant work
- [AESMC](https://arxiv.org/abs/1705.10306), [FIVO](https://arxiv.org/abs/1705.09279), [VSMC](https://arxiv.org/abs/1705.11140)
- [DVRL](https://arxiv.org/abs/1806.02426)
- [SIXO](https://arxiv.org/abs/2206.05952), [NAS-X](https://arxiv.org/abs/2308.14864)
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review which will be used to clarify details of the paper
1. We aim to sample from the target policy over sequences $\tau= (s_0, a_0, s_1, a_1, ..., s_t, a_t, s_{t+1})$ similar to [60]. In the following notation we switch to using $\tau$ from $x$ and give definitions for the density $p$ and proposal $\beta$ at policy iteration $i$.
* $p_i(\tau_{1:t}) \propto \mu(s_0) \prod_{j=0}^t \mathcal{T}(s_{j+1}|s_j,a_j) \pi_i(a_j|s_j, \theta_i) \exp\left(\frac{\bar{A_i}(a_j,s_j)}{\eta^*_i}\right)$
* $p_i(\tau_t|\tau_{1:t-1}) \propto \mathcal{T}(s_{t+1}|s_{t},a_{t}) \pi_i(a_t|s_t, \theta_i) \exp\left(\frac{\bar{A_i}(a_t, s_t)}{\eta_{i}^*}\right)$
* $\beta(\tau_{1:t}) \propto \mu(s_0) \prod_{j=0}^t \mathcal{T}_{\text{model}}(s_{j+1}|s_j,a_j) \pi_i(a_j|s_j, \theta_i)$
* $\beta(\tau_{t}|\tau_{1:t-1}) \propto \mathcal{T}_{\text{model}}(s_{t+1}|s_t,a_t) \pi_i(a_t|s_t, \theta_i)$
The weight update:
$$w_t(\tau_{1:t}) \propto w_{t-1}(\tau_{1:t-1}) \cdot \frac{p(\tau_t|\tau_{1:t-1})}{\beta(\tau_t|\tau_{1:t-1})} = w_{t-1}(\tau_{1:t-1}) \cdot \left(\frac{\mathcal{T}(s_{t+1}|s_{t}, a_{t})}{\mathcal{T}_{\text{model}}(s_{t+1}|s_{t}, a_{t})}\right) \cdot \frac{\exp(\bar{A_i}(a_t, s_t)/\eta_{i}^*) \cdot \pi_i(a_t|s_t, \theta_i)}{\pi_i(a_t|s_t, \theta_i)}$$
2. You are correct, the target distribution over sequences utilises the sum of advantages and not explicitly per-step rewards. This does account for the sum of per-step rewards: we calculate $A^{\pi}(a_t,s_t) = \mathbb{E}_{\pi}[r(a_t,s_t) + \gamma V^{\pi}(s_{t+1})] - V^{\pi}(s_t)$, therefore for $\gamma = 1$, over sequences, we have the following estimate $\sum_{t=0}^h A^{\pi}(a_t,s_t) = \sum_{t=0}^h r(a_t,s_t) + V^{\pi}(s_{h+1}) - V^\pi(s_{0})$,
the sum of per-step rewards plus the value of the last state, minus the value of the first state. If the weight only uses the per-step rewards, future rewards beyond the planning horizon are not accounted for; this destroys performance (see new results, Fig. 2). Without a baseline, the weight update would be $\prod_{j=0}^t \exp(Q^{\pi}(a_j,s_j)/\eta^*)$; see line 660 for the ablation.
3. In optimising 4, the first term $\mathbb{E}_{q(a|s)} \left[Q^q(s,a)\right]$ is dependent on the scale of reward in the environment. This can make it hard to choose $\alpha$, as the first and second terms are on arbitrary scales. As explored in previous work [1,69] we choose to change this from a soft constraint to a hard constraint (line 158). Instead of choosing $\alpha$ we choose $\epsilon$ which is the maximum KL divergence between $q$ and $\pi$ which is practically less sensitive to reward scale.
4. Eq 9 is a supervised learning step; we have choice over the parameterisation of our policy and prior for regularisation. RL algorithms often constrain policies from moving too far from the current policy, and in this work we utilise a Gaussian prior around $\theta_i$ optimised from the previous iteration. Therefore, $\theta \sim \mathcal{N}(\mu, \Sigma)$ where $\mu = \theta_i$ and $\Sigma^{-1} = \lambda F(\theta_i)$, where $F$ is the Fisher information matrix. The prior term in (9) can be written as $\log p(\theta) = -\frac{\lambda}{2}(\theta - \theta_i)^T F(\theta_i)(\theta - \theta_i) + c$, where $c$ is a term that does not depend on $\theta$. The quadratic form is (up to the factor $\lambda$) the quadratic approximation of the KL divergence [2*], so we can rewrite equation (9) as:
$$
\max_{\pi} \mathbb{E}\_{\mu_q(s)} \left[ \mathbb{E}_{q(a|s)} \left[ \log \pi(a|s, \boldsymbol{\theta}) \right] - \lambda \text{KL} \left( \pi(a|s, \boldsymbol{\theta_i}), \pi(a|s, \boldsymbol{\theta}) \right) \right]
$$
Like the E-step, choosing $\lambda$ can be non-trivial so we convert it into a hard constraint optimisation (eq 10). $\epsilon$ in Eq (10) is different from (5). The constrained M-step (with a Gaussian prior) is not required for SPO, however it adds stability to training.
5. SPO does not require a Q-function and only learns a value function, since we leverage observed rewards and state values from planning to calculate advantages. We learn the value function as in PPO, i.e., using GAE targets. GAE combines multiple N-step TD-error estimates,
$A_\text{GAE}(s_t) = \sum_{l=0}^{N} (\gamma \lambda)^l \delta_{t+l}$
where $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$, so that $V_{\text{target}}(s_t) = V(s_t, \theta) + A_\text{GAE}(s_t)$. These value targets are constructed not during planning but on real rollouts. It is a practical choice what method is used to generate targets; we opt for GAE to balance bias and variance.
6. In Eq 4, when optimising with respect to $q$ there is a moving target: each update to $q$ results in a new critic $Q^q$. Our approach aligns with most actor-critic approaches [65] that fix a critic within an iteration of policy optimisation. As for the suggestion of optimising the $Q^q$ term and fixing the acting policy $q$, this would be a highly non-trivial optimisation. Since $Q^q$ depends on the transition dynamics and reward function of the environment, unless we knew these we would need to perform off-policy evaluation over the space of possible policies $q$; such evaluation is known to be extremely challenging [1*], since the behaviour of a policy is difficult to estimate from data gathered under a different distribution. We are not aware of RL algorithms that approach optimisation from this perspective.
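For intuition, the weight recursion from point 1 can be sketched in a few lines (an illustrative toy, not our implementation; `step_fn`, `policy_fn` and `advantage_fn` are placeholders for the model dynamics, the current policy and the 1-step advantage estimate):

```python
import numpy as np

def smc_search(step_fn, policy_fn, advantage_fn, s0, n_particles, horizon,
               eta, rng, resample_every=1):
    """Toy SMC sketch of the weight recursion above (hypothetical API).

    Because the proposal is the model dynamics times the current policy,
    the policy terms cancel and the incremental log-weight reduces to
    A_hat(a_t, s_t) / eta."""
    states = np.array([s0] * n_particles, dtype=float)
    first_actions = None
    log_w = np.zeros(n_particles)
    for t in range(horizon):
        actions = policy_fn(states, rng)
        next_states = step_fn(states, actions, rng)
        log_w += advantage_fn(states, actions, next_states) / eta
        if first_actions is None:
            first_actions = actions.copy()
        if (t + 1) % resample_every == 0:
            # Multinomial resampling; weights are reset afterwards.
            w = np.exp(log_w - log_w.max())
            idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
            states, first_actions = next_states[idx], first_actions[idx]
            log_w[:] = 0.0
        else:
            states = next_states
    return first_actions  # empirical sample of root actions under the target
```

Resampling concentrates particles on high-advantage branches, and the surviving root actions form the empirical target policy used in the M-step.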
**Minor**
1. $\gamma$ is introduced in the background as the discount factor; however, we will add a reminder near (3) for clarity.
2. $p(x_t| x_{1:t-1})$ is unnormalised; we define $p(x)$ before Eq. (2) as an arbitrary target distribution.
3. See major q4
4. $\mu$ is the distribution over states. Any policy acting in the MDP will induce a different state distribution. We therefore write $\mu_q$ in Eq 11, as this distribution depends on the policy $q$. We will add this clarification to the manuscript.
**References**
[1*] Dudík, M., et al. "Doubly Robust Policy Evaluation and Learning." ICML 2011.
[2*] Schulman, J., et al. "Trust Region Policy Optimization." International Conference on Machine Learning, PMLR 2015. | Summary: This paper proposes SPO: Sequential Monte Carlo Policy Optimisation, an RL algorithm within the Expectation Maximisation (EM) framework for MDPs. Experiments on both discrete and continuous environments show that SPO outperforms baselines in terms of sample efficiency and scalability.
Strengths: Quality: The paper clearly introduces the background of Sequential decision-making, Control as inference, Expectation Maximisation and Sequential Monte Carlo.
Significance: The performance improvement of the proposed algorithm is significant.
Weaknesses: Originality: The paper propose SPO, which is the combination of Sequential Monte Carlo (SMC) sampling with the EM framework. The novelty is limited.
Quality: The paper does not clearly state what problem it aims to tackle. The paper uses SMC to better estimate the target distribution, but does not give any experimental results showing better estimation. Ablation studies are missing. Theoretical insights of Proposition 1 are missing.
Clarity: The paper is hard to follow as it is full of formulas.
Technical Quality: 2
Clarity: 1
Questions for Authors: See the weakness.
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **On Novelty:** Re: "limited novelty", we respectfully disagree with the reviewer. We highlight three contributions (all included in the manuscript lines 27, 39, 38):
1. This is the first instance of Sequential Monte Carlo being used as a policy improvement operator for RL. To date, MCTS has been the most widely adopted search-based policy improvement operator. The paper presents a robust search algorithm, with a theoretical foundation, resulting in an operator that outperforms competing methods on a range of benchmarks.
2. The introduction of an inherently parallelisable operator is also a strong contribution. We are not aware of any successful search-based policy improvement operators that are able to leverage parallelism over search to speed up inference during training. In fact, most works note that parallelisable versions of MCTS strongly reduce performance [1,2], and parallelisable versions of AlphaZero have not been successfully implemented for policy improvement [3].
3. Introducing a method that requires no algorithmic modifications between continuous and discrete environments, while demonstrating strong performance in both settings, should not be considered limited. The generality of popular RL algorithms is often limited, with algorithms such as AlphaZero requiring considerable modifications just to be applied to complex continuous domains. This is an important barrier to the applicability of search-based policy improvement methods.
**On Quality/Clarity:**
The manuscript highlights deficiencies with the current SOTA methods (lines 22-25). After introducing SPO (lines 26-36), we explain how SPO tackles each of the highlighted problems. To clarify, we aim to solve the scalability problem in current search-based policy improvement methods, such that search can be executed in parallel without the performance deterioration observed in previous works [1,2]. This allows search methods to leverage hardware accelerators to perform search faster, reducing training times (lines 38-39). We explain that SPO is widely applicable to discrete and continuous environments without modifications (lines 37-39). Lastly, we explain that SPO is theoretically backed (line 29) and that the method scales for different levels of planning (lines 41-42).
**On Ablations:** "Ablation studies are missing" — this is an incorrect statement from the reviewer. In the original manuscript we provide two important ablations. We ablate the difference between sampling the target policy in (6) and the alternative formulation without a baseline (line 660), important as previous works differ on these objectives [1,69]. We also provide a second ablation that demonstrates how the appropriate temperature for SMC varies significantly between environments (line 649). This backs up our approach of learning the temperature variable, and demonstrates that this is an important contribution for reducing hyperparameter tuning. We add a third ablation to elicit insight into how SPO enforces the KL constraint in policy improvement (Fig. 1). We show two charts with varying runs for different maximum KL constraints. When viewed at the iteration level, we see that SPO is able to accurately target the KL of choice (Fig. 1a). Fig. 1b shows the impact different KL constraints have on training curves.
Re: "the paper does not give experimental results showing better estimation results". In practice we do not have access to the target distribution, so it is non-trivial to construct an ablation that directly measures how well methods estimate it. However, we add a new ablation that leverages Monte Carlo samples to generate an unbiased estimate (given a large compute budget). It compares the KL divergence of our SMC policy to a Monte Carlo oracle (Fig. 3). We investigate the impact of depth and particle number on the KL divergence to the oracle for SMC on Sokoban. This shows that scaling particles is particularly important for improving the target estimation, aligning with SMC theory [4], and that leveraging depth also improves estimation. Methods like MPO that limit particles to the number of actions and depth 0 are severely limited in their estimation of the target; likewise, V-MPO only uses 1 sample, albeit to a depth of N. Lastly, comparing MPO to SPO, there is a performance gap across all environments. Both methods have similar theoretical foundations, the difference being how samples are obtained from the target. Therefore any performance differences can be attributed to changes in target estimation.
**On Theoretical insights:** Similarly, the following statement made by the reviewer is untrue: "Theoretical insights of proposition 1 are missing." We provide full theoretical insights for Proposition 1 in Appendix Section G.3 (line 749). This is referenced in the main manuscript on line 238.
**On Formulas:** Many of the formulas are either existing work, or modifications to existing work. We believe that the provided theoretical justification is in line with the standards of rigour expected at NeurIPS. It is reasonable to expect that experts in the field would feel comfortable with work that provides such level of detail in order to clearly understand the theoretical foundations and methodology. If you think certain formulas could be re-positioned in the paper, with additional insights provided, we welcome specific feedback in order to increase readability.
**References**
[1] Liu, Anji, et al. "Watch the Unobserved: A Simple Approach to Parallelizing Monte Carlo Tree Search." arXiv preprint, 2018.
[2] Segal, Richard B. "On the Scalability of Parallel UCT." International Conference on Computers and Games, Springer Berlin Heidelberg, 2010.
[3] Seify, Arta, and Michael Buro. "Single-Agent Optimization Through Policy Iteration Using Monte-Carlo Tree Search." arXiv preprint, 2020.
[4] Del Moral, Pierre, et al. "Branching and Interacting Particle Systems Approximations of Feynman-Kac Formulae with Applications to Non-Linear Filtering." Springer Berlin Heidelberg, 2000. | Summary: This paper introduces a novel iterative, particle-based approach to sequential Monte Carlo planning by combining expectation maximization with importance sampling. Their approach uses a model to sample real state-action trajectories in a problem domain and then, after importance-weighting said transitions, computes a target policy distribution for the agent. This target policy is used for policy improvement. The paper supplies a theoretical derivation of their approach and provides experimental results against baseline algorithms.
Strengths: **Originality**: The paper introduces a novel combination of previously established algorithms to tackle sequential decision making.
**Significance:** The results of the paper demonstrate that their proposed approach is *easily parallelizable*, *per-iteration faster* and *explicitly tackles a shortcoming of a prior approach (i.e., addresses weight degeneracy)*. All these would make their work interesting to the planning community.
**Clarity:** The paper is well written and easy to understand.
**Quality:** The motivation for the work is strong as they study a problem of general interest to the community. The conceptual rationale for their approach is sound and the theoretical build makes sense. The experimental evaluation focused on the claims the authors made and backs up said claims. They not only show performance improvements against commonly used approaches but also improvements on computational costs.
Weaknesses: I think that an evaluation with an imperfect model should have been included rather than left as future work.
Technical Quality: 3
Clarity: 3
Questions for Authors: Line 204: How exactly is $\hat{A}$ computed? It isn't very clear on that.
How much does removing samples based on weight affect exploration? Is this only helpful when you have a perfect model?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: They broach the limitations of their work sufficiently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review.
Re: evaluation of SPO with an imperfect model. Our primary goal was to focus on the novel planning aspects of SPO and its use within training. We believe that the results demonstrate that SMC is a valuable planning method in its own right demonstrably outperforming other commonly used planning methods like AlphaZero when the model is known.
However, we acknowledge that incorporating a learned model is a crucial next step for demonstrating practical applicability. Recent advancements in model-based methods, such as DreamerV3 [2] and EfficientZeroV2 [3], offer advanced architectures for modelling the underlying MDP. These methods are policy-learning agnostic and could easily integrate with the SPO algorithm. As long as the world model accurately predicts rewards, SPO should perform effectively. Additionally, Piché et al. (2018) [1] demonstrated the feasibility of using a learned model with a SMC-based search for maximum entropy RL. This prior work supports the potential integration of learned models in our approach, which we plan to explore in future work.
**Questions**
*Q: Line 204: How exactly is $\hat{A}$ computed? It isn't very clear on that.*
A: Practically, at each step in the sequence we estimate the advantage of a single state-action pair using the 1-step estimate $\hat{A}(a_t,s_t) = r(a_t,s_t) + \gamma V(s_{t+1}, \theta_i) - V(s_t, \theta_i)$. As we perform the search, we accumulate these advantages at each step to calculate the overall importance weights of the sequence according to Eq. 8. However, if desired, any advantage estimation technique that can be constructed during the search from a sequence of rewards and values could be used: for example, GAE, which allows a trade-off between bias and variance.
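As a minimal illustrative sketch (not the authors' code), the 1-step estimate above could be computed as follows. The names `rewards` and `values` are hypothetical; `values` is assumed to come from the learned value function $V(\cdot,\theta_i)$ and includes the bootstrap value of the final state:

```python
# Sketch of the 1-step advantage estimate
# A_hat(a_t, s_t) = r(a_t, s_t) + gamma * V(s_{t+1}) - V(s_t).

def one_step_advantages(rewards, values, gamma):
    # `values` has length T+1: it includes the value of the state
    # reached after the last action (the bootstrap term).
    return [rewards[t] + gamma * values[t + 1] - values[t]
            for t in range(len(rewards))]

adv = one_step_advantages(rewards=[1.0, 0.0], values=[0.5, 0.25, 0.0], gamma=0.9)
# adv[0] = 1.0 + 0.9*0.25 - 0.5 = 0.725; adv[1] = 0.0 + 0.0 - 0.25 = -0.25
```

Accumulating these per-step advantages along a sampled sequence then yields the importance weights of Eq. 8.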
*Q: How much does removing samples based on weight affect exploration? Is this only helpful when you have a perfect model?*
A: This is a good question. The most important factor is actually the temperature variable, which is controlled by the desired KL divergence $\epsilon$ between the target policy $q_i$ generated by search and the current policy $\pi_i$. If the temperature is too low, then during resampling the particles can collapse onto a single root action, which would make the rest of the search pointless. However, when the temperature is too high, the search can be too exploratory and fail to travel down promising paths effectively. This does need to be considered when scaling the search method depth-wise and when choosing the resample period. This is not only helpful when using a perfect model: ultimately, as long as a learned model has accurate reward and dynamics prediction, the exploration/exploitation behaviour is the same. We will amend our manuscript to include this explanation in order to provide greater intuition on the exploration of the SPO search.
Thank you once again for your valuable feedback.
**References**
[1] Piché, Alexandre, et al. "Probabilistic planning with sequential monte carlo methods." International Conference on Learning Representations. 2018.
[2] Hafner, Danijar, et al. "Mastering diverse domains through world models." arXiv preprint arXiv:2301.04104 (2023).
[3] Wang, Shengjie, et al. "EfficientZero V2: Mastering Discrete and Continuous Control with Limited Data." arXiv preprint arXiv:2403.00564 (2024).
---
Rebuttal Comment 1.1:
Title: Acknowledgement of the rebuttal
Comment: I acknowledge that I have read the rebuttal and other reviews and I maintain my score. | null | null | Rebuttal 1:
Rebuttal: In this section we provide additional results/experiments in response to reviewer xUvG and reviewer X2nr.
**Additional Results**
**Figure 1**
While we give in-depth details regarding how the KL constraint arises within SPO and the method through which it is enforced (see Appendix G.2), in the original manuscript we did not provide concrete evidence of this KL control aside from showing how important the temperature can be for training (see ablation B.1). We therefore add an ablation that plots, on Brax, both the performance of various SPO training runs with different values of $\epsilon$ and an iteration-level mean of the KL divergence between $\pi_i$ and $\hat{q}$, the current policy and the estimate of the target $q$. We expect this value to stay very close to the target, as for each iteration a new temperature value is calculated to directly target this measure. The results show two things: firstly, that our method of KL control is very successful and stable; secondly, the importance of the KL divergence for convergence in performance.
**Figure 2**
In response to queries regarding the weight update we use in SPO, we add an additional experiment to the ablation in Section B.2. We remove not only the use of the action-independent baseline but also the use of the bootstrapped value function, so that the weight update depends simply on the observed rewards. The results in Figure 2 show that not accounting for the future rewards beyond the final state severely impacts performance, on top of the reduction in performance from not utilising the value baseline.
**Figure 3**
In Figure 3 we aim to address feedback that we do not clearly show or measure the estimation of the target distribution. We do believe that our performance results are strong evidence for better estimation of the target, but agree that some more direct insight into target estimation will help strengthen the conclusions that can be taken away from the paper. Of course, in practice the target distribution is unknown, as we do not have perfect advantage estimates. However, given a fixed iteration of training, and purely for the purposes of ablation (this would be completely unachievable during training), we can perform a large-scale Monte Carlo simulation to form unbiased estimates of the advantage function.
In this ablation we perform Monte Carlo estimation of the Q-value using 1280 rollouts to the end of the episode for every state and every action. Advantage estimates can then be formed using the following
$\hat{A}(a_t,s_t) = \hat{Q}(a_t,s_t) - \sum_{a \in A} \pi(a|s_t) \cdot \hat{Q}(a,s_t)$.
This allows us to form an estimate of the target distribution for a single state as follows
$q_{i}(a_t|s_t) \propto \pi(a_t|s_t, \theta_i) \exp\left(\frac{\hat{A}(s_t,a_t)}{\eta^*}\right)$.
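As a hedged sketch (illustrative only, with hypothetical names), the two displays above can be computed for a single state as follows, where `q_hat` holds the Monte Carlo Q-value estimates, `pi` the current policy probabilities, and `eta` plays the role of the temperature $\eta^*$:

```python
import numpy as np

def mc_advantages(q_hat, pi):
    # A_hat(a) = Q_hat(a) - sum_a' pi(a') * Q_hat(a')
    return q_hat - np.dot(pi, q_hat)

def target_distribution(pi, adv, eta):
    # q(a) proportional to pi(a) * exp(A_hat(a) / eta), normalized
    unnorm = pi * np.exp(adv / eta)
    return unnorm / unnorm.sum()

pi = np.array([0.5, 0.3, 0.2])
q_hat = np.array([1.0, 0.0, -1.0])
adv = mc_advantages(q_hat, pi)          # expectation under pi is zero
q = target_distribution(pi, adv, eta=1.0)
```

By construction $\sum_a \pi(a|s_t)\hat{A}(a,s_t)=0$, and actions with positive advantage get up-weighted relative to the current policy.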
Using this accurate unbiased estimate, we then measure the KL divergence from the Monte Carlo oracle to SPO, which comparatively uses far lower compute. Within this ablation we scale the depth and the number of particles to demonstrate the impact these variables have on estimation. We provide results on Sokoban across 1280 episodes.
The results in Figure 3 demonstrate how important scaling particles is for improving the estimate of the target distribution, with significant drops in KL achieved. We also see that depth contributes to the reduction of the KL divergence. This highlights that methods that estimate the same target but do not perform planning, i.e. use a depth of 0, or that limit the number of actions the distribution is estimated over [1, 69], will have much worse estimation of the target distribution, directly impacting the speed of EM training.
Pdf: /pdf/49bf244048f827a0f7db01f282978671c637d467.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Log-concave Sampling from a Convex Body with a Barrier: a Robust and Unified Dikin Walk | Accept (poster) | Summary: The authors propose a sampling framework from a convex body with a barrier.
Strengths: The main strength of the paper lies at the novelty and the significance of the results, especially that of Theorem 1.1.
Weaknesses: See questions below.
Technical Quality: 3
Clarity: 4
Questions for Authors: Page 1:
Can you please motivate the problem a bit better? For a broader CS audience? :)
Page 2:
Can you please give some intuition about the benefit of using the Hessian in the design of your walk?
Page 3:
Can you provide further details about the hardness of constructing the Hessian of the universal barrier?
Page 4:
I do not understand Lines 154 -- 156.
Page 5:
Line 15:
Why is this probability 0.5 \min\{\tau, 1\}?
Page 6:
Lines 196, 202:
Can you please give some intuition about the symmetry and convexity properties? :)
Page 7:
Line 240:
I do not understand this math display.
Can you please elaborate on Equation (1)?
Page 8:
Lines 285 -- 288:
I do not understand this sentence :)
Can you please give some details about TensorSRHT?
Page 9:
Can you please provide a more extensive future work? Thanks!
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful comments and questions, and we would be happy to answer your questions.
**Q**: Can you motivate the problem a bit better, for a broader CS audience?
**A**: Yes. Sampling is a fundamental problem in computer science and machine learning. Consider any constrained optimization problem where the goal is to optimize a function over a constrained set: to run any kind of first-order or second-order algorithm, one first has to construct a good *initial point*. This usually boils down to sampling from the constrained set according to particular distributions, which is the central topic studied in this paper. In recent years, owing to the surge of differentially private machine learning, one can also implement the *exponential mechanism* using log-concave sampling from convex bodies.
**Q**: Can you give some intuition on the benefits of using Hessian in your walk?
**A**: In short, the walk we are using is a variant of the ball walk, which can be succinctly described as: at the current point $x$, randomly pick a point $y$ in the ball centered at $x$ with radius $\delta$, and move to $y$ with the correct acceptance probability. This idea, however, does not work so well if the convex body is, say, a long and flat ellipse: we are constrained to pick the ball radius $\delta$ small, as otherwise we could walk outside of the convex body, and hence the walk takes many steps to mix. Computing the Hessian counters this issue by instead computing a maximal-volume ellipsoid around $x$: this enables us to sample points with different scales in different directions, and hence enables our walk to mix faster.
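To make this intuition concrete, here is a minimal sketch of a single Dikin-style proposal for a polytope $\{x : Ax \le b\}$ using the log-barrier Hessian. This is illustrative only: the paper's walk additionally uses approximate Hessians, laziness, and a Metropolis correction, none of which are shown here.

```python
import numpy as np

def log_barrier_hessian(A, b, x):
    # Hessian of -sum_i log(b_i - a_i^T x): A^T diag(1/s_i^2) A,
    # where s = b - A x are the slacks (positive for interior x).
    s = b - A @ x
    return A.T @ np.diag(1.0 / s**2) @ A

def dikin_proposal(A, b, x, radius, rng):
    # Sample from the Dikin ellipsoid: z = x + radius * H^{-1/2} g,
    # so step sizes adapt to the local shape of the body.
    H = log_barrier_hessian(A, b, x)
    L = np.linalg.cholesky(np.linalg.inv(H))
    return x + radius * L @ rng.standard_normal(len(x))

# Demo: the unit box [-1, 1]^2; at the center the Hessian is 2*I.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
H = log_barrier_hessian(A, b, np.zeros(2))
z = dikin_proposal(A, b, np.zeros(2), radius=0.1, rng=np.random.default_rng(0))
```

Near a face the slacks shrink, the Hessian blows up in that direction, and the ellipsoid flattens accordingly, which is exactly what lets the walk take large steps along the body without stepping outside.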
**Q**: Can you provide further details of the hardness of constructing the Hessian of universal barrier?
**A**: Given a $d$-dimensional convex body $K$, the universal barrier is defined as $\log\mathrm{vol}_d (K^\circ(x))$ where $\mathrm{vol}_d$ is the $d$-dimensional volume and $K^\circ(x)$ is the polar set of $K$ with respect to $x$: $K^\circ(x)=\\{ y\in \mathbb{R^d}: y^\top (z-x)\leq 1, \forall z\in K\\}$. The main computational obstacle comes from even implementing the zeroth order oracle for the barrier function: estimating the volume of a high dimensional convex body is hard, no deterministic algorithm could estimate the volume to a factor of $(\frac{d}{\log d})^d$ [BF86], and randomized algorithms are essentially MCMC types that require generating samples from the convex body. See also [NN94] for a more detailed exposition.
**Q**: I do not understand line 154 - 156.
**A**: Line 154 - 156 says that the cost of computing the Hessian of log-barrier for a spectrahedra is $O(dn^\omega+n^2 d^{\omega-1})$, if we consider the regime $n\geq d$, then the first term $dn^\omega$ dominates since it has a larger exponent on $n$. If $n\geq d^{\frac{3\omega-4}{\omega-2}}$ which is roughly $n\geq d^{8.4}$, then our algorithm is faster than the $dn^\omega$ time needed by exact computation.
**Q**: Line 15: why is this probability $0.5 \min\\{\tau, 1 \\}$?
**A**: Because we employ a lazy walk that only moves to the next point with probability one half.
**Q**: Can you give more intuition on the symmetry and convex properties?
**A**: The symmetry property is very helpful when one tries to relate the self-concordance parameter to the cross-ratio distance. Intuitively, we prove that all barriers of concern are also $\sqrt{\nu}$-symmetric, and we further prove that the cross-ratio distance between any two interior points $u$ and $v$ is at least $\frac{1}{\sqrt{\nu d+dL^2R^2}}$; this establishes our mixing time. The convexity of the regularized barrier is crucial when we try to bound the discrepancy between the log-determinants at the current point and the point we are trying to move to. Recall that we define the point to move to as $z=x+\Phi(x)^{-1/2} \psi$, where $\psi$ is an i.i.d. Gaussian vector. Given the convexity, we can bound the log-determinant as
$\log \det \Phi(z) - \log \det \Phi(x) \geq (z-x)^\top \nabla \log \det \Phi(x)$, and the RHS can be bounded by the local norm, which we can control.
**Q**: Line 240: I do not understand the math display. Can you elaborate more on Equation (1)?
**A**: The point of the math display of line 240 is to show that the probability could be decomposed into a determinant term and a term depending on the local norm of $x$ and $z$, and our subsequent analysis shows how to bound these two terms. For Equation (1), it is the convex program associated with $\ell_p$ Lewis weights. It involves finding a maximum volume ellipsoid subject to the (soft) constraint that the polytope is contained in the ellipsoid. Here, by ``soft’’, we mean that instead of forcing $a_i^\top M a_i\leq 1$ for all $i$ which corresponds to the $\ell_\infty$ norm, we instead consider an $\ell_p$ norm constraint, which could be written as $\sum_{i=1}^n (a_i^\top Ma_i)^{p/2}\leq d$.
**Q**: Line 285 - 288: I do not understand this sentence.
**A**: This sentence compares our sampling algorithm with the IPM-based algorithms for semidefinite optimization [JKLPS20, HJSTZ22]. In those algorithms, the Hessian matrix could be carefully maintained and controlled, further improving the per iteration cost of the algorithm. For us however, while it might still be possible to develop a similar framework to reduce the per iteration cost, the picture is much less clear, and the strategy we employ is a simple method that does not require any sophisticated maintenance of the Hessian matrix.
---
Rebuttal 2:
Title: Rebuttal (continued)
Comment: **Q**: Can you provide some more details about TensorSRHT?
**A**: TensorSRHT is a type of sketching matrix for the tensor product of vectors. To start, consider the SRHT matrix: it can be decomposed as $PHD$, where $P$ is a sampling matrix that samples $m$ coordinates, $H$ is a Hadamard matrix and $D$ is a diagonal matrix with diagonal entries being i.i.d. Rademacher random variables. The key here is that multiplying $PHD$ with an $n$-dimensional vector takes $O(n\log n+m)$ time, because the Hadamard matrix can be applied to vectors with a fast Fourier transform. TensorSRHT is a tensorization of this transform: the matrix is defined as $P (HD_1 \otimes HD_2)$ where $\otimes$ is the Kronecker product. Using the mixed-product property, this transform can be applied to a tensor product of vectors without forming the tensor product: $P (HD_1\otimes HD_2)(u\otimes v)=P(HD_1 u \otimes HD_2 v)$, in time $O(n\log n+m)$ instead of $O(n^2)$. See [AKK+20] for more details.
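The mixed-product identity above can be verified numerically with a small sketch (illustrative only; a real implementation would use a fast Hadamard transform rather than the explicit matrices built here):

```python
import numpy as np

def hadamard(n):
    # Build the n x n Hadamard matrix (n a power of 2) by the
    # Sylvester recursion; a real SRHT applies this via FFT-style
    # butterflies in O(n log n) instead of materializing it.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
n = 4
H = hadamard(n)
D1 = np.diag(rng.choice([-1.0, 1.0], size=n))  # Rademacher diagonals
D2 = np.diag(rng.choice([-1.0, 1.0], size=n))
u, v = rng.standard_normal(n), rng.standard_normal(n)

# Slow path: form the n^2-dimensional tensor u ⊗ v explicitly.
slow = np.kron(H @ D1, H @ D2) @ np.kron(u, v)
# Fast path: mixed-product property, only n-dimensional products.
fast = np.kron(H @ D1 @ u, H @ D2 @ v)
```

Both paths agree coordinate-wise, so the sampling matrix $P$ can simply pick $m$ coordinates of `fast` without ever touching the $n^2$-dimensional object.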
**Q**: Can you provide a more extensive future work?
**A**: For sure. To further elaborate, we believe there are several important future directions:
* Removing or mitigating the dependence on $R$: one significant advantage of the Dikin walk for uniform sampling is that the mixing time does not depend on $R$, in contrast to hit-and-run. It will be important to develop a walk for log-concave sampling that does not depend on $R$. We note that [KV24] shows that if the function $f$ additionally exhibits a strongly-convex property, then it is possible to remove the $R$. It will be interesting to see whether this is still possible without strong convexity.
* Improving per iteration cost. Many recent breakthroughs in convex optimization based on the interior point method leverage the fact that the Hessian matrix changes relatively slowly, hence it’s possible to develop linear algebraic data structures to maintain the Hessian matrix and reduce the per iteration cost. Note that our Markov chain also takes a significant number of steps to mix, and the key results we proved in the work shows that the Hessian changes relatively slowly. It will be interesting to design a linear algebraic data structure for maintaining the Hessian matrix and further accelerate the algorithm.
* Better mixing for spectrahedra. It is known that for spectrahedra, the hybrid barrier has self-concordance parameter $\sqrt{nd}$ [Ans00], in contrast to the $n$ of the log barrier. It would be interesting to verify the three conditions on the hybrid barrier and obtain a walk that mixes faster than ours in the current work.
**References**:
[BF86] Imre Barany, Zoltan Furedi. Computing the volume is difficult. STOC 1986.
[NN94] Yurii Nesterov and Arkadii Nemirovskii. Interior-point polynomial algorithms in convex programming. SIAM 1994.
[JKLPS20] Haotian Jiang, Tarun Kathuria, Yin Tat Lee, Swati Padmanabhan, and Zhao Song. A faster interior point method for semidefinite programming. FOCS 2020.
[HJSTZ22] Baihe Huang, Shunhua Jiang, Zhao Song, Runzhou Tao, and Ruizhe Zhang. Solving SDP faster: a robust IPM framework and efficient implementation. FOCS 2022.
[AKK+20] Thomas D Ahle, Michael Kapralov, Jakob BT Knudsen, Rasmus Pagh, Ameya Velingker, David Woodruff and Amir Zandieh. Oblivious sketching of high-degree polynomial kernels. SODA 2020.
[Ans00] Kurt M Anstreicher. The volumetric barrier for semidefinite programming. MOR 2000.
[KV24] Yunbum Kook and Santosh Vempala. Gaussian Cooling and Dikin Walks: The Interior-Point Method for Logconcave Sampling. COLT 2024.
---
Rebuttal 3:
Comment: Thank you! :) | Summary: The paper gives a Markov chain Monte Carlo algorithm for estimating the probability of a high dimensional polytope under a log-concave probability distribution. The algorithm improves the mixing time of previous results, while maintaining the best known per-iteration cost. More specifically, in conjunction with a known self-concordant barrier function, they use a cheap spectral approximation of the Hessian matrix required in an iteration, and they modify the Markov chain to converge more quickly despite the approximation. A second result applies similar ideas to spectrahedrons, and there the improvement over past results is both the asymptotic number of iterations and the asymptotic computational complexity of each iteration.
Strengths: This is an important and fundamental problem that has been researched extensively recently, and the result improves upon the best known algorithms.
Weaknesses: The improvement is somewhat incremental, but nonetheless significant.
Technical Quality: 4
Clarity: 4
Questions for Authors: None.
Confidence: 1
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your review. We respectfully disagree with your point that the improvement we obtain is incremental, as the prior best algorithms for $d$-dimensional polytopes with $n$ hyperplanes mixes in $nd$ steps, while ours mixes in $d^2$ steps. For the regime where $n\gg d$, this is a huge improvement, and it is in fact the best known mixing time for uniform sampling over polytopes. Moreover, our framework extends to general convex bodies with well-behaved self-concordant barrier functions. Prior works [MV23] could only handle polytopes.
**Reference**
[MV23] Oren Mangoubi, Nisheeth K. Vishnoi. Sampling from Structured Log-Concave Distributions via a Soft-Threshold Dikin Walk. NeurIPS 2023.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thank you for your response. I did say that the improvement is significant. I agree that the improvements in the mixing time, and in the scope of application are significant.
---
Reply to Comment 1.1.1:
Comment: We appreciate your clarification and thank you again for your valuable review! | Summary: In this paper, the authors give improved mixing times for a Markov chain whose goal is to approximately sample from distributions whose densities are proportional to $\exp(-f(x))$, where $f$ is $L$-Lipschitz and convex, restricted to a convex set defined by the intersection of $n$ hyperplanes that lies in a Euclidean ball of radius $R$. The Markov chain itself is an approximate variant of the Dikin walk that tolerates using spectral approximations to the Hessian instead of the exact Hessian for drawing the next sample.
The main result is as follows. Given a subroutine to $1/d$-spectrally approximate the Hessian of the barrier that runs in time $C_g$ and given that the barrier itself is $\nu$-self concordant, the authors give a Markov chain whose iteration complexity to converge to a $\delta$-TV-approximate distribution to the target is $\widetilde{O}(\nu d + d L^2 R^2)\log(1/\delta)$. Each step has runtime $C_g + d^{\omega}$.
This general framework recovers some of the best known mixing time bounds for special cases of the studied problem, including sampling from the uniform distribution over a convex body. Moreover, for the setting considered, as far as I can tell, this is the first guarantee whose iteration complexity does not depend on the number of constraints defining the polytope (this seems to get replaced by the self-concordance parameter of the barrier, which can be made to be $\widetilde{O}(d)$ using more sophisticated barriers than the log-barrier).
The ideas also transfer to giving mixing times for the problem of sampling uniformly from the constraint set of a covering SDP. Again, a key part of the contribution is that the walk can tolerate using spectral approximations to the Hessian of the barrier (in this case, the log det barrier) instead of requiring it exactly.
The main technical insight is that there are a few properties that one needs the barrier to satisfy in order to give the $O(\nu d)$ dependence in the mixing time. They consist of a symmetry assumption, a bounded local norm condition, and that the barrier is convex under regularizing it with an identity matrix (see page 6). Under these assumptions, the authors prove that their variant of the Dikin walk satisfies the promised mixing time (the argument is executed in Appendices B, C, D).
Strengths: The problem addressed in this work is an important one to advance our understanding of the geometry of polytopes. The conditions under which mixing times can be obtained are pretty general, and as the authors show, only depend on certain natural properties of the barrier function that is being used. The results also imply significant quantitative improvements over prior work (in particular the result about uniformly sampling from polytopes with an iteration complexity that is independent of the number of constraints when $n$ is much larger than $d$).
Weaknesses: There are a few questions I have (see below).
Besides that, a very minor comment -- in Section 2.3, the Lewis weight optimization program as written is not actually convex. But, there is a convex formulation for all $p > 0$ given in [LS19].
Technical Quality: 4
Clarity: 3
Questions for Authors: Can one obtain better mixing times under more structural assumptions on the convex set (e.g. symmetry, some notion of uniform convexity, etc)? Or is this an open question?
Do you think it is possible to get runtime improvements by intelligently recycling Hessian computations? Then, the amortized runtime of finding each approximate Hessian could improve over the stated $C_g + d^{\omega}$. In particular, self concordance implies some kind of Hessian stability (for any two nearby points, the Hessians at those points are spectrally close) -- so, could you hope that the Hessians of the barrier aren't changing too much between two consecutive steps?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your thoughtful and positive feedback for our work. We thank you for pointing out the convexity of the Lewis weights program for all $p>0$ as indicated in [LS19], and this is particularly the case for our application when $p$ is $\mathrm{poly}\log n$. For your questions,
**Q**: Could one hope to obtain a better mixing time bound on more structural assumptions on the convex sets?
**A**: To the best of our knowledge, the only work that explores this frontier is due to Kook and Vempala [KV24], where they showed that if $f$ satisfies the so-called relatively strongly-convex property, then the mixing time could be independent of $R$.
**Q**: Do you think it’s possible to get further runtime improvements by smartly reusing Hessian computations?
**A**: Yes, we do believe that it’s possible to further reduce per iteration cost by developing a linear algebraic data structure to maintain the inverse Hessian matrix. Similar to many IPM-based algorithms for optimization [CLS19, LSZ19, JSWZ21], since the algorithm takes many steps, the relative change between iterations is small, hence it’s possible to design data structures to amortize the cost. We believe this is an important next step for this line of research, as our work has essentially obtained a good (up to dependence on $L$ and $R$) mixing time.
**References**:
[LS19] Yin Tat Lee and Aaron Sidford. Solving Linear Programs with Sqrt(rank) Linear System Solves. 2019.
[KV24] Yunbum Kook and Santosh Vempala. Gaussian Cooling and Dikin Walks: The Interior-Point Method for Logconcave Sampling. COLT 2024.
[CLS19] Michael Cohen, Yin Tat Lee and Zhao Song. Solving Linear Programs in the Current Matrix Multiplication Time. STOC 2019.
[LSZ19] Yin Tat Lee, Zhao Song and Quiyi Zhang. Solving Empirical Risk Minimization in the Current Matrix Multiplication Time. COLT 2019.
[JSWZ21] Shunhua Jiang, Zhao Song, Omri Weinstein and Hengjie Zhang. Faster Dynamic Matrix Inverse for Faster LPs. STOC 2021.
---
Rebuttal Comment 1.1:
Title: response to author response to review
Comment: Thanks so much for the detailed response :) | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
TinyTTA: Efficient Test-time Adaptation via Early-exit Ensembles on Edge Devices | Accept (poster) | Summary: The paper "TinyTTA: Efficient Test-time Adaptation via Early-exit Ensembles on Edge Devices" presents a combination of test-time adaptation of pre-trained models and early-exit strategies for the efficient inference of deep learning models on edge devices. The authors introduce the ideas of test-time adaptation (i.e., some parameters of the model are changed during deployment by, e.g., entropy minimization) and model splitting by introducing "early exit" nodes in the model. Most notably, the authors introduce a novel weight normalization that does not rely on batch statistics (e.g., BatchNorm) but re-weights and re-scales the weights of the model directly. Finally, the authors briefly present their novel inference engine, TinyTTA, which packages the authors' ideas into a usable framework that is used for the experiments. The authors showcase some impressive results on established image datasets (CIFAR10C, CIFAR100C, OfficeHome, PACS) and well-known small devices (a Raspberry Pi and a dual-core embedded Arduino). In particular, the presented method has low latency and low energy consumption while often offering much better accuracy than existing methods.
Strengths: - The overall idea of bringing test-time adaptation through early exits to small devices is good
- The TinyTTA engine and the experimental evaluation seem impressive and well done
Weaknesses: Overall, this is a very engineering-focused paper without much methodological impact. Therefore, it can be questioned whether it fits the NeurIPS conference. More specifically:
- The authors propose self-ensembling, which basically means splitting the network into smaller submodules. The idea is very straightforward and not really studied in depth in the paper. It is unclear how to split the network (the authors mention using activations as a guiding tool, but it is unclear how exactly they do it). Similar ideas have already been discussed in the literature (see next comment), which are not mentioned in the paper.
- The authors propose early exits, which is not a novel technique but one well-known in the literature [1,2,3]. As far as I remember, this technique has already been used since YOLOv5 (although mainly for improved training, and this might be debatable) [4]. Unfortunately, the Related Work section focuses on test-time adaptation (which is good. I learned a lot here, thanks!) but misses out on the related work on early-exit networks. Hence, it is unclear to me to what extent self-ensembling and early exits are really new or just a re-brand of existing methods.
- While the evaluation is generally good, it does not highlight the memory overhead of introducing early exits. At the very least, the authors have to introduce some classification heads for each exit, which require additional parameters. I could not find a mention of this in the paper, but maybe I missed it (see my question).
- Minor issues with the paper
- Citations can be improved: There are a few arxiv papers that have been published already. I suggest using dblp for high-quality references (Note: There might be more than these three; I stopped checking after three):
- Tent: Fully Test-time Adaptation by Entropy Minimization is an ICLR 2021 paper I think
- Towards Stable Test-Time Adaptation in Dynamic Wild World is an ICLR 2023 paper I think
- RobustBench: a standardized adversarial robustness benchmark is a NeurIPS 2021 paper I think
- The authors mention data distributional shifts multiple times in the paper. I suggest to clarify this a bit more, since it is unclear to me if we look at shift in the label space or data space.
- In section 3.1, the sentence "[...] and approximating each submodule with the full model's capabilities" is unclear. See my question.
- In section 3.1, the merging of subsequent layers into submodules is not really clear. See my question.
- The authors mention that "activations consume much more memory compared to weights", which is also unclear to me. See my question.
[1] Why should we add early exits to neural networks? by Scardapane etal. 2020 https://arxiv.org/abs/2004.12814
[2] BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks by Teerapittayanon etal. 2017 https://arxiv.org/abs/1709.01686
[3] T-RECX: Tiny-Resource Efficient Convolutional neural networks with early-eXit by Ghanathe and Wilton 2023 https://arxiv.org/pdf/2207.06613
[4] TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-captured Scenarios by Zhu etal. 2021
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) What is the overhead in memory/number of parameters by adding new prediction heads? Did you explore this?
2) How did you analyze the activations for grouping the models into sub-modules? Did you apply some principle here, or was it an ad-hoc grouping?
3) What do you mean when you write, "activations consume much more memory compared to weights"?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discuss limitations in the evaluation of their method (image data only, only one MCU), but the evaluation is generally well-done and, hence, quite thorough.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for all your positive comments. Please see below our response to the specific weakness and questions.
>Q1: Unclear how to use activation memory to split the network. How did you analyze the activations for grouping the models into sub-modules? Did you apply some principle here, or was it an ad-hoc grouping?
A1: Per-layer memory profiling is based on TinyTL[1], as discussed in lines 141-148 and 253-254. The grouping is post-hoc by visualizing the similar memory usage of adjacent layers as discussed in line 146. We will further elaborate this in Appendix D.
>Q2: The proposed early exits are not a novel technique but are well-known in the literature. To what extent are self-ensembling and early exits really new rather than a re-brand of existing methods?
A2: We acknowledge that early-exit networks have been explored in the literature, primarily for inference efficiency. However, they have never been explored under data distribution shifts and for adaptation purposes. Our work is the first to introduce early exits for on-device test-time adaptation in edge devices, enhancing both inference efficiency and model accuracy with data distributional shifts. Specifically, our approach innovatively integrates weight standardization (WS) within the heads of early exits and designs these heads for subsequent modules. This ensures adaptation relies solely on WS in the heads, maintaining both inference speed and model accuracy with data distributional shifts. Moreover, our approach uniquely combines self-ensembling (frozen after fine-tuning), early exits, and WS normalization. This innovative design significantly improves both inference speed and accuracy. Additionally, our method brings novelty in hardware deployment by introducing the TinyTTA Engine, enabling the first on-device test-time adaptation with unlabeled data.
We will include more references on early-exit networks in the Related Work section and further clarify this novelty in Section 3.2.
>Q3: While the evaluation is generally good, it does not highlight the memory overhead of introducing early exits.
A3: We have now conducted experiments to measure the memory and parameter overhead of adding new prediction heads. The results are below:
| Dataset | Model | 1st Exit Memory (MB) | 1st Exit Params (K) | 2nd Exit Memory (MB) | 2nd Exit Params (K) | 3rd Exit Memory (MB) | 3rd Exit Params (K) |
|---------|-------|----------------------|---------------------|----------------------|---------------------|----------------------|---------------------|
| CIFAR10C | MobileNet | 0.0103 | 2.70 | 0.0323 | 8.46 | 0.0660 | 17.29 |
| CIFAR100C | MobileNet | 0.0436 | 11.43 | 0.0985 | 25.83 | 0.1652 | 43.30 |
| OfficeHome & PACS | MobileNet | 0.0306 | 8.03 | 0.0728 | 19.07 | 0.1266 | 33.19 |
| CIFAR10C | EfficientNet | 0.0198 | 5.19 | 0.1685 | 44.17 | 0.5230 | 137.10 |
| CIFAR100C | EfficientNet | 0.0696 | 18.24 | 0.3336 | 87.46 | 0.7540 | 197.67 |
| OfficeHome & PACS | EfficientNet | 0.0502 | 13.17 | 0.2694 | 70.62 | 0.6642 | 174.11 |
| CIFAR10C | RegNet | 0.0957 | 25.09 | 0.5349 | 140.22 | 0.5349 | 140.22 |
| CIFAR100C | RegNet | 0.1482 | 38.86 | 0.6616 | 173.43 | 0.6616 | 173.43 |
| OfficeHome & PACS | RegNet | 0.1278 | 33.51 | 0.6123 | 160.51 | 0.6123 | 160.51 |
According to the results of the above table, we observe the following:
- Adding new prediction heads results in a minimal memory increase. For instance, MobileNet shows an overhead ranging from 0.01 MB to 0.17 MB (with parameters ranging from 2.7K to 43.30K), which is negligible compared to the 512 MB MPU memory.
- Different models exhibit varying overheads. EfficientNet shows the highest increase in both memory (up to 0.75 MB) and parameters compared to MobileNet and RegNet.
- Overall, compared to the 512 MB MPU memory deployed, the overhead remains relatively small. We will extend this discussion in Appendix D.
>Q4: Citations can be improved: There are a few Arxiv papers that have been published already.
A4: We will thoroughly review our paper and update our citations using DBLP to ensure high-quality references.
>Q5: The authors mention data distributional shifts multiple times in the paper. I suggest to clarify this a bit more, since it is unclear to me if we look at shift in the label space or data space.
A5: The shift occurs in the data space. As illustrated in Fig. 2, for an image classification task with an image as the input, the leftmost cat image shows a small distribution shift, as it still clearly depicts the cat; in this case, the severity is 1. Gradually, in the second and third images, the cat’s image becomes noisier and harder to discern with the human eye, corresponding to severities of 3 and 5, respectively. We will further clarify these distributional shifts in the data space in Section 3.
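To make the data-space shift concrete, a corruption at increasing severity can be sketched as follows (an illustrative sketch in the spirit of the CIFAR10-C severity levels; the Gaussian noise model and `base_sigma` are hypothetical choices, not the benchmark's exact corruption functions):

```python
import random

def corrupt(pixels, severity, base_sigma=0.04):
    """Hypothetical data-space corruption: add Gaussian noise whose
    strength scales with severity (labels are unchanged)."""
    rng = random.Random(0)  # fixed seed for a reproducible illustration
    sigma = base_sigma * severity
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in pixels]

clean = [0.5] * 100  # a flat gray "image" in [0, 1]
mild = corrupt(clean, severity=1)
severe = corrupt(clean, severity=5)
# Higher severity perturbs pixels more, while values stay in [0, 1].
print(sum(abs(p - 0.5) for p in mild) < sum(abs(p - 0.5) for p in severe))  # True
```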
>Q6: What do you mean when you write, "activations consume much more memory compared to weights"?
A6: Activations consume much more memory than weights in neural networks because activation values need to be stored for every layer during backpropagation, especially when using large batch sizes and deep networks. For instance, consider a single CNN layer with an input image (e.g., CIFAR100C) of size 224x224x3 and a batch size of 1. The input activations alone would be 1 (batch size) * 224 * 224 * 3 (channels) ≈ 0.15 million values, and the output activations after a convolutional layer with 64 filters would be 224 * 224 * 64 (channels/filters) ≈ 3.2 million values. In contrast, the convolutional layer weights would only be 64 (channels/filters) * 3 * 3 * 3 (kernel size) = 1,728 values. This indicates that the sheer volume of activations, especially in layers producing large feature maps, leads to significantly higher memory consumption compared to the relatively small number of weights. We will clarify this in Appendix A.
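The arithmetic can be verified in a few lines (illustrative only):

```python
# Value counts (not bytes) for one 3x3 conv layer at batch size 1.
H, W, C_in, C_out, K = 224, 224, 3, 64, 3

input_acts = 1 * H * W * C_in         # input activations
output_acts = H * W * C_out           # output activations ("same" padding)
weights = C_out * C_in * K * K        # conv kernel parameters

print(input_acts)              # 150528  (~0.15 million)
print(output_acts)             # 3211264 (~3.2 million)
print(weights)                 # 1728
print(output_acts // weights)  # activations outnumber weights ~1858x
```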
[1] TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning. NeurIPS 2020
---
Rebuttal 2:
Comment: Dear reviewer. Thank you for reading our rebuttal! We believe that our response addresses your raised weaknesses (more justification and ablation study) and questions (more experimental results). If you agree that our response addresses the weaknesses and questions, please consider raising your score. If you have any outstanding concern, please let us know so that we can do our best to address them. | Summary: The paper "TinyTTA: Efficient Test-time Adaptation via Early-exit Ensembles on Edge Devices" presents a framework designed to enable test-time adaptation (TTA) on resource-constrained IoT devices. TinyTTA utilizes a self-ensemble and batch-agnostic early-exit strategy to adapt to data distribution shifts efficiently with smaller batch sizes, thereby reducing memory usage and improving latency. The approach is validated on a Raspberry Pi Zero 2W and an STM32H747, demonstrating significant improvements in accuracy and memory efficiency.
Strengths: 1. The authors address the challenges of deploying TTA on devices with stringent memory and computational constraints, which is a less explored area
2. The use of self-ensemble networks and early-exit strategies for TTA on edge devices allows for adaptive inference based on confidence levels, optimizing both memory usage and computational efficiency.
3. The framework uses Weight Standardization (WS) as a substitute for traditional normalization layers, specifically tailored for microcontrollers with strict memory limitations.
4. The paper provides a comprehensive evaluation of the framework across multiple datasets and compares its performance with several state-of-the-art methods. The evaluation covers all important metrics like accuracy, memory usage, latency, etc.
5. The paper is well-structured and written in a clear way, explaining all concepts and methodologies. Key concepts such as self-ensemble networks, early-exit strategies, and weight standardization are explained with sufficient clarity.
6. It introduces the TinyTTA Engine, an MCU library that enables on-device TTA.
Weaknesses: main:
- The paper only presents results on vision data; what about other types of data?
- Implementation details needed for reproducibility are missing and the code is not provided. E.g., there are no details about how submodules are created for the models used in the paper, nor about the hyperparameters used during training, which hinders reproducibility.
- While the results of the experiments are effectively demonstrated in the figures, the readability of the figures can be improved. Specifically, Figure 5 is hard to read. The caption of this figure is also not explanatory enough. Consider briefly summarizing the results in the caption.
- The comparison was only limited to other TTA methods. Including non-TTA methods might provide a baseline to better understand the advantages of implementing TTA in resource-constrained environments
- The experiments primarily focus on a specific type of MCU (STM32H747) and one MPU (Raspberry Pi Zero 2W). Is TinyTTA generalizable across other platforms as well?
- Section 3.1, self-ensemble network, second paragraph: "(iii) certain groups of adjacent layers, specifically layers 2-15, 16-28, 29-44, and 45-52, show similar sizes of activations. Based on this analysis, we group layers with similar memory usage into submodules for subsequent early exits to improve memory usage: i.e., layers 0-15 for submodule 1, 16-28 for submodule 2, 29-44 for submodule 3, and 45-52 for submodule 4."
This is
- very model specific: does this generalize to other models?
- overly detailed. Please move such details to a table or so.
- Section 3.1. second paragraph and also Figure 3: this is well known and, for example, also stated in the MCUNet paper.
- Details on the early exit are not clear: what layers etc. do the authors use?
- The ablation study is incomplete: needs to also show early exit with and without model updates.
- Table 1 should get another line: "Inference-only with EE".
- The paper does not explore the sensitivity of TinyTTA to various hyperparameters.
- The paper lacks a detailed analysis of adaptation time and energy consumption.
- The paper does not discuss the challenges and trade-offs involved in deploying the framework.
minor:
* last paragraph of the introduction: 512 KB of SRAM stated twice
* Section 3.2 Early Exits: "For a given pre-trained model ," -> remove space before comma
* Figure 5: I suggest using similar colors for the same models, e.g., the same color for MCUNet and MCUNet+TinyTTA
* Figure 6: font size of the figures is too small
* Figure 7: please add units % and KB
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could you possibly add more details about computational overhead of self-ensembles and early exits.
- What is the motivation for using entropy thresholding in early exit?
- While you evaluated TinyTTA on four benchmark corruption datasets, these are based on synthetic noise. How does TinyTTA perform on datasets with real-world distribution shifts or noise?
- While you mention improvements in energy efficiency, a detailed analysis is not provided. Can you provide a breakdown of energy consumption for different components of TinyTTA, and compare it with baseline methods?
- How do the optimal entropy thresholds in Appendix G differ and impact the performance of the system?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provided a limitations section, and to the best of my understanding, the paper does not have any negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments. Please see our responses below.
>Q1: Other type of data other than vision data?
A1: We tested on the Musan Keywords Spotting audio dataset [2] using a pretrained MicroNets [1] (86% accuracy on Speech Commands V2), which includes 35 speech commands with real-world noises, as below:
|**Method**|**Accuracy**|
|----------|------------|
|No Adaptation|0.53|
|CoTTA|0.21|
|TENT (Finetune)|0.05|
|TENT (Modulating)|0.11|
|EATA|0.07|
|ECoTTA|0.23|
|**TinyTTA (Ours)**|**0.61**|
We observed:
- A 33% performance drop in the pretrained model under distribution shifts.
- TinyTTA achieved the highest accuracy of 0.61 (8% improvement).
>Q2: Implementation Details.
A2: As noted in line 683, we will provide the source code of the TinyTTA Engine. We will further elaborate in Appendix C.
>Q3: Non-TTA method?
A3: We implemented a new baseline using Test-Time Training (TTT) [3] with self-supervised rotation classification, using SGD (momentum 0.9, lr 1e-5), an augmentation size of 20, and a batch size of 1. Memory is tested on a Raspberry Pi Zero 2W.
| Method | Model | CIFAR10C Acc↑ | CIFAR100C Acc↑ | OfficeHome Acc↑ | PACS Acc↑ | CIFAR10C Mem (MB)↓ | CIFAR100C Mem (MB)↓ | OfficeHome Mem (MB)↓ | PACS Mem (MB)↓ |
|---|---|---|---|---|---|---|---|---|---|
| TTT | MCUNet | 0.16 | 0.05 | 0.07 | 0.07 | 0.41 | 1.35 | 1.32 | 1.33 |
| | EfficientNet | 0.18 | 0.06 | 0.06 | 0.12 | 12.81 | 37.43 | 37.21 | 37.71 |
| | MobileNet | 0.17 | 0.07 | 0.08 | 0.10 | 11.43 | 36.27 | 36.71 | 36.58 |
| | RegNet | 0.15 | 0.12 | 0.06 | 0.07 | 12.28 | 14.33 | 14.45 | 14.20 |
| TinyTTA (Ours) | MCUNet | **0.64** | **0.52** | **0.58** | **0.64** | **0.2** | **0.73** | **0.71** | **0.72** |
| | EfficientNet | **0.68** | **0.53** | **0.62** | **0.66** | **5.65** | **16.94** | **16.97** | **16.91** |
| | MobileNet | **0.65** | **0.53** | **0.60** | **0.63** | **5.58** | **16.74** | **16.79** | **16.77** |
| | RegNet | **0.64** | **0.51** | **0.54** | **0.57** | **6.13** | **6.25** | **6.28** | **6.22** |
- TinyTTA outperforms TTT by ~50% across four datasets and models.
- TTT requires, on average, double the memory compared to TinyTTA.
>Q4: The experiments focus on a specific type of MCU and MPU. Is TinyTTA generalizable to other chips?
A4: Yes, we selected these two platforms because they represent a range of devices. Specifically, since our MCU is based on an ARM Cortex-M core, TinyTTA can be directly deployed on other ARM Cortex-M-based MCUs. We also chose one of the smallest Raspberry Pis to ensure that other Pis can also run TinyTTA. Further clarification will be provided in Appendix C.1.
>Q5: Ablation study of early exit w/ and w/o model updates.
A5: The accuracy of the model w/o updates is shown in Fig. 5. The memory usage (MB) w/ and w/o updates is shown below.
| Model | CIFAR10C w/o | CIFAR10C w/ | CIFAR100C w/o | CIFAR100C w/ | OfficeHome w/o | OfficeHome w/ | PACS w/o | PACS w/ |
|---|---|---|---|---|---|---|---|---|
| MCUNet | 0.189 | 0.2 | 0.726 | 0.73 | 0.69 | 0.71 | 0.711 | 0.72 |
| EfficientNet | 4.94 | 5.65 | 15.78 | 16.94 | 15.99 | 16.97 | 15.93 | 16.91 |
| MobileNet | 5.47 | 5.58 | 16.43 | 16.74 | 16.56 | 16.79 | 16.54 | 16.76 |
| RegNet | 5.81 | 6.18 | 4.44 | 6.25 | 4.47 | 6.28 | 4.41 | 6.22 |
- The memory usage w/ and w/o adaptation is very similar.
- TinyTTA generally requires limited memory to perform on-device TTA.
We will update Fig. 7 and Section 5.4.
>Q6: Computational overhead of self-ensembles and early exits?
A6: Self-ensembles are conducted offline (line 465). We analysed the overhead in memory and the number of parameters of the early exits:
|Model|Dataset|1st Exit Memory (MB)|1st Exit Params (K)|2nd Exit Memory (MB)|2nd Exit Params (K)|3rd Exit Memory (MB)|3rd Exit Params (K)|
|---|---|---|---|---|---|---|---|
|MobileNet|CIFAR10C|0.01|2.70|0.03|8.46|0.07|17.29|
| |CIFAR100C|0.04|11.43|0.10|25.83|0.17|43.30|
| |OfficeHome & PACS|0.03|8.03|0.07|19.07|0.13|33.19|
|EfficientNet|CIFAR10C|0.02|5.19|0.17|44.17|0.52|137.10|
| |CIFAR100C|0.07|18.24|0.33|87.46|0.75|197.67|
| |OfficeHome & PACS|0.05|13.17|0.27|70.62|0.66|174.11|
|RegNet|CIFAR10C|0.10|25.09|0.53|140.22|0.53|140.22|
| |CIFAR100C|0.15|38.86|0.66|173.43|0.66|173.43|
| |OfficeHome & PACS|0.13|33.51|0.61|160.51|0.61|160.51|
Adding new prediction heads results in minimal memory increase, e.g., MobileNet's overhead is 0.01 MB to 0.17 MB, negligible compared to 512 MB MPU memory.
We will extend this in Appendix D.
>Q7: Motivation for using entropy thresholding? Sensitivity of hyperparameters?
A7: Entropy thresholding, as used in many TTA methods like [4], avoids high-entropy, less reliable samples to maintain TTA performance. Hyperparameters, including the entropy threshold, are determined post hoc [3] and discussed in Appendix G. We will add a table of layer exits and other hyperparameters in Appendix D.
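One plausible reading of entropy-thresholded early exiting, sketched in plain Python (not the TinyTTA implementation; the threshold of 0.5 is a hypothetical value):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy of a categorical distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def early_exit(exit_logits, threshold=0.5):
    """Return the index of the first exit whose prediction entropy is
    below the threshold (i.e., confident enough), or the last exit."""
    for i, logits in enumerate(exit_logits):
        if entropy(softmax(logits)) < threshold:
            return i
    return len(exit_logits) - 1

confident = [5.0, 0.1, 0.1]  # low-entropy prediction: exit here
uncertain = [1.0, 0.9, 1.1]  # high-entropy prediction: defer
print(early_exit([uncertain, confident]))  # 1
```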
>Q8: Energy consumption compared baseline methods?
A8: We have now compared the latency and energy consumption on the Raspberry Pi Zero 2W MPU using CIFAR10C:
Totals over the 50,000 CIFAR10C test images:
| Method | Latency (seconds) | Energy (Wh) |
|--------|-------------------|-------------|
| CoTTA | 312,500 | 173.61 |
| TENT (Finetune) | 25,500 | 14.17 |
| TENT (Modulating) | 25,500 | 14.17 |
| EATA | 12,500 | 6.94 |
| ECoTTA | 18,850 | 10.47 |
| TinyTTA (Ours) | 11,000 | 6.11 |
- TinyTTA has an inference time of 0.22 seconds per sample and energy consumption of 0.122 mWh, showing high efficiency.
- TinyTTA outperforms baselines, reducing latency by 12% (1500 seconds) and energy consumption by 12% (0.83 Wh) compared to EATA.
[1] MicroNets. MLSys 2021 \
[2] Importantaug. ICASSP 2022 \
[3] Test-Time Training with Self-Supervision for Generalization under Distribution Shifts. ICML 2020 \
[4] Efficient Test-Time Model Adaptation without Forgetting. ICML 2022
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal and new results. I do not have any further questions at this point.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer. Thank you for reading our rebuttal! We believe that our response addresses your raised weaknesses (more justification and ablation study) and questions (more experimental results). If you agree that our response addresses the weaknesses and questions, please consider raising your score. If you have any outstanding concern, please let us know so that we can do our best to address them. | Summary: This work presents a test-time adaptation framework for tiny deep neural networks. Specifically, the proposed framework partitions a specific model based on the memory usage of each layer, clusters adjacent layers with similar memory usage into a submodule, and adds an early exit header for each module. To avoid using batch normalization, the authors adopt weight standardization for the early exit header layer. Only the early exit header layer and the corresponding weight standardization parameters are updated during test-time adaptation. The authors also developed an MCU library to support the aforementioned test-time adaptation. The framework can support low-end IoT devices with only 512KB of memory.
Strengths: 1. Real device deployment: It is great to see that the proposed framework can facilitate the deployment of tiny neural networks on low-end IoT devices with only 512 KB of memory.
2. Well-motivated: The analysis of the memory usage of existing test-time adaptation techniques clearly highlights the drawbacks of previous methods, making the idea of partitioning models based on memory usage quite straightforward.
3. Impressive results: The significantly better accuracy versus memory usage compared to baseline test-time adaptation on four different models is very impressive.
Weaknesses: 1. Lack of discussion on design choices: For example, why can't the "fine-tune bias only" technique from TinyTL [28] be used in test-time adaptation? What is its performance compared to only fine-tuning the early exit header proposed in this work? Why is there a "lack of support for normalization layers on MCUs"? Since the authors have developed their own MCU library, why can't the normalization layer be added to the library?
2. Limited experiments: As the authors mentioned in the Conclusion section, this work only targets image data. Thus, it is unclear whether the design in this work can be generalized to other applications. For example, will only updating the early exit header be sufficient for other applications?
3. Lack of details on the TinyTTA Engine: Since the algorithm is not entirely novel (i.e., adding multiple early exit headers and weight standardization are not proposed by the authors, but the authors may be the first to use them in test-time adaptation for tiny models), the TinyTTA Engine itself seems to be the key factor ensuring these techniques work efficiently and effectively on real devices. More details and insights from the implementation of the TinyTTA Engine would be greatly appreciated by the community.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Will you open-source the TinyTTA engine?
2. Is there any real-world case to show that updating the model via backpropagation, instead of simply switching some modes, makes a significant difference for tiny models?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The author discussed the limitations but did not address the potential negative societal impact of their work. This should be fine because, in my opinion, this work does not have any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments. Please see our responses to the specific weaknesses and questions below.
>Q1: Lack of discussion on design choices: why can't the "fine-tune bias only" technique from TinyTL [28] be used in test-time adaptation? What is its performance compared to only fine-tuning the early exit header proposed in this work?
A1: TinyTL [28] needs labels for incoming samples to adapt the model, whereas TTA typically handles situations where only samples are available, without labels. This is more practical in real-world settings, as providing labels is often challenging due to unforeseen noise and environmental factors. We have now evaluated "fine-tune bias only" via entropy minimization and compared it with adjusting the exits. Specifically, during TTA, only biases are allowed to be updated. The results are below:
|**Method**|**CIFAR10C**|**CIFAR100C**|**OfficeHome**|**PACS**|
|----------|------------|-------------|--------------|--------|
|Bias only-MCUNet|0.15|0.09|0.11|0.07|
|**Exits-MCUNet**|**0.64**|**0.52**|**0.58**|**0.64**|
|Bias only-EfficientNet|0.13|0.15|0.18|0.09|
|**Exits-EfficientNet**|**0.68**|**0.53**|**0.62**|**0.66**|
|Bias only-MobileNet|0.16|0.13|0.15|0.11|
|**Exits-MobileNet**|**0.65**|**0.53**|**0.60**|**0.63**|
|Bias only-RegNet|0.18|0.13|0.19|0.17|
|**Exits-RegNet**|**0.64**|**0.51**|**0.54**|**0.57**|
Based on the experiment, we can observe that: \
(1) Adjusting bias alone could not achieve reliable TTA performance. \
(2) TinyTTA is relatively stable across four datasets and models. \
We consider that these results stem from the characteristics of TTA, which aims to align data distribution shifts by adjusting the mean and variance; adjusting the bias alone is insufficient to maintain reliable performance across different datasets and conditions. We will incorporate these new results in Appendix E under a new heading: "Comparison with Updating Bias Only".
>Q2: Why is there a "lack of support for normalization layers on MCUs"? Since the authors have developed their own MCU library, why can't the normalization layer be added to the library?
A2: Normalization layers such as batch normalization are designed to work with mini-batches of data, whereas MCUs, due to limited memory, typically process only a single sample at a time. Normalization layers could technically be added to MCU libraries; in practice, however, the normalization and convolution operations are fused into a single convolution operation to save computation and memory, as in [1,2]. We will emphasize this further in the paper at Appendix A, "Modulating and Finetune TTA".
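The conv-norm fusion mentioned above can be sketched per output channel as follows (a minimal illustration of the folding arithmetic, not the actual library code from [1,2]):

```python
import math

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters (gamma, beta, running mean/var) into the
    preceding conv's per-channel weights w and biases b, so inference
    runs a single conv. w[i] is the flattened weight list of channel i."""
    w_f, b_f = [], []
    for i in range(len(w)):
        scale = gamma[i] / math.sqrt(var[i] + eps)
        w_f.append([wi * scale for wi in w[i]])
        b_f.append((b[i] - mean[i]) * scale + beta[i])
    return w_f, b_f

# Toy example with one output channel (eps=0 here for exact numbers).
w_f, b_f = fold_bn_into_conv(
    w=[[1.0, 2.0, 3.0]], b=[0.5],
    gamma=[2.0], beta=[1.0], mean=[0.5], var=[1.0], eps=0.0)
print(w_f, b_f)  # [[2.0, 4.0, 6.0]] [1.0]
```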
>Q3: Limited experiments: As the authors mentioned in the Conclusion section, this work only targets image data. Thus, it is unclear whether the design in this work can be generalized to other applications. For example, will only updating the early exit header be sufficient for other applications?
A3: We conducted an experiment on a different data modality, audio with real-world distribution shift, using a MicroNets [1] model pretrained on Speech Commands V2 with 86% accuracy. This dataset contains 35 keywords such as "yes," "no," "forward," etc. We tested the model on the Musan Keywords Spotting test dataset [2], which includes the 35 speech commands under real-world noises such as dial tones, fax machine noise, car idling, thunder, wind, footsteps, rain, and animal noises. The setting aims to adapt the pretrained speech command model to real-world scenarios. TinyTTA parameters are: learning rate (lr) = 1e-5, batch size 1, the SGD optimizer with a momentum of 0.9, and self-ensemble early exits at layers [3, 5, 7]. The results are as follows:
|**Method**|**Accuracy**|
|----------|------------|
|No Adaptation|0.53|
|CoTTA|0.21|
|TENT (Finetuning)|0.05|
|TENT (Modulating)|0.11|
|EATA|0.07|
|ECoTTA|0.23|
|TinyTTA (Ours)|0.61|
Based on the experiment, we can observe that: \
(1) The pretrained model could experience a performance drop of 33% in distribution shift settings. \
(2) TinyTTA improved accuracy by 8% over the baseline, showing strong resilience to various noises. \
(3) TinyTTA achieved the highest accuracy of 0.61, significantly outperforming other methods (the highest among the state-of-the-art techniques is CoTTA with 0.23).
>Q4: Lack of details on the TinyTTA Engine: Since the algorithm is not entirely novel.
A4: Our approach uniquely combines self-ensembling (frozen after fine-tuning), early exits, and WS normalization. This innovative design significantly improves both inference speed and accuracy. Additionally, our method brings novelty in hardware deployment by introducing the TinyTTA Engine, enabling the first on-device test-time adaptation with unlabeled data. We will provide more details in Appendix C including details of backpropagation, operators, layerwise update strategy, and dynamic memory allocation.
>Q5: Will you open-source the TinyTTA engine?
A5: Yes. As stated in line 683, the TinyTTA Engine code will be made fully publicly available upon acceptance.
>Q6: Is there any real-world case to show that updating the model via backpropagation, instead of simply switching some modes, makes a significant difference for tiny models?
A6: In realistic scenarios of TTA, we do not have knowledge of the given target domain. Hence, it is difficult to switch to the right model suitable for the target domain. Additionally, note that our target hardware consists of MCUs with extremely limited storage, typically at most 1 MB. Even if we have knowledge of the target domain for a stream of data, we can only store up to 2-3 tiny models (refer to Table 1 for the memory and storage requirements of a single model). We will clarify this in Appendix A.
[1] Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. CVPR2018 \
[2] TensorFlow Lite Micro. MLSys 2021.
---
Rebuttal 2:
Comment: Dear reviewer. Thank you for reading our rebuttal! We believe that our response addresses your raised weaknesses (more justification and ablation study) and questions (more experimental results). If you agree that our response addresses the weaknesses and questions, please consider raising your score. If you have any outstanding concern, please let us know so that we can do our best to address them.
---
Rebuttal Comment 2.1:
Title: Thanks for the rebuttal!
Comment: Thank you to the authors for the detailed response and for the efforts in conducting the additional experiments.
I have revised my evaluation.
---
Reply to Comment 2.1.1:
Comment: Dear reviewer. Thank you so much again for your time and effort in thoroughly reviewing our work/rebuttal and your response! We are glad that our response addressed your questions properly. In our final draft, we will update our paper based on your feedback and our rebuttal.
Sincerely, The Authors | Summary: In this work, the authors focus on enabling test-time adaptation on resource-limited edge devices. To achieve this, the authors first train a self-ensemble network whose sub-networks are partitioned according to memory usage. They then adopt WS normalization to improve adaptation capacity at a batch size of one. In the experiments, the proposed method shows better accuracy with higher efficiency compared to prior test-time adaptation works.
Strengths: 1. Although self-ensemble learning and WS normalization are not new, this appears to be their first use for on-device test-time adaptation.
2. It's interesting to see that adopting WS normalization can achieve better accuracy compared to other normalization layers in the batch-size-1 setting.
Weaknesses: 1. The motivation for modulating the pre-trained model according to memory usage is confusing.
2. The on-device training process is unclear.
3. The writing needs to be improved.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. The motivation for modulating the pre-trained model according to memory usage is confusing. Fig. 3 shows that initial layers occupy much more memory than later layers. However, the authors group the first several layers as the first submodule, which should always be activated during on-device test-time adaptation, leading to less memory reduction. Would it be better to skip some of the initial layers and activate more of the later layers?
2. One main concern is that the training process happened on device is unclear.
(1) Self-ensemble learning has a higher training cost. In line 206, the authors mention that "After training, only the submodules and early exits will be deployed on MCUs." It seems that the proposed method first does self-ensemble learning offline, which incurs extra training cost.
(2) Since the work targets test-time adaptation, where the source data should be unavailable, how do the authors train the self-ensemble networks?
(3) In line 220, the authors said "The cornerstone of TinyTTA lies in its ability to perform backpropagation on-device". What is the difference compared to [1]? Which part of the model will be trained on device?
[1] On-device training under 256kb memory." NeurIPS 2022
3. Fig. 3 is unclear. From my understanding, modulation aims to partition the entire model into different groups based on activation memory. Why does the activation memory change per layer, and why is there no weight memory, compared to the "fine-tuning per-layer memory usage"?
4. Writing needs to be improved:
(1) In line 97, "practical" should be "impractical"
(2) Line 170 is incomplete
(3) In line 175, the authors said "to entirely omit usage of normalization layers and .... on MCUs". Isn't WS normalization itself a normalization layer used on MCUs?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the insightful comments. Please see below our responses.
> Q1: In Fig. 3, initial layers occupy much more memory than later layers. Would it be better to skip some of the initial layers and activate more of the later layers?
A1: Memory usage primarily concerns activations, which store the outputs at each layer. This storage is essential during backpropagation, as computing gradients requires retaining both the outputs and gradient values for each node in each layer. However, we only update the Weight Standardization (WS) layer in the heads at early exits and freeze the submodules after self-ensembling (cf. Fig 2). This process does not require backpropagation to the submodules, thus eliminating the need for activation memory. Consequently, it does not increase memory usage.
Additionally, avoiding the initial layers is not feasible because these layers capture crucial information from the input. Skipping them would result in incomplete information being learned, leading to degraded performance. We will further clarify this in Appendix A, “Modulating and Finetune TTA”.
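For reference, Weight Standardization itself can be sketched in a few lines; unlike BatchNorm, it depends only on the weights rather than on a batch of activations, which is why it works at batch size 1 (an illustrative sketch, not the TinyTTA code):

```python
import math

def weight_standardize(w, eps=1e-5):
    """Standardize a filter's weights to zero mean and unit variance.
    The statistics come from the weights themselves, not from a batch
    of activations, so no mini-batch is required."""
    n = len(w)
    mean = sum(w) / n
    var = sum((x - mean) ** 2 for x in w) / n
    return [(x - mean) / math.sqrt(var + eps) for x in w]

ws = weight_standardize([1.0, 2.0, 3.0])
print([round(x, 3) for x in ws])  # [-1.225, 0.0, 1.225]
```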
>Q2: The authors mention "After training, only the submodules and early exits will be deployed on MCUs." It seems that the method first do self-ensemble offline which suffers from extra training cost.
A2: The self-ensemble model is trained offline, so there is no additional training cost on the device. The training procedure is discussed in Appendix C.2. We will include a heading for the first paragraph as 'Pre-training of Self-ensemble' and further clarify this point.
>Q3: Since source data should be unavailable. How does the authors train the self-ensemble networks?
A3: The self-ensemble networks are trained offline, so we still use the source data, the same as related work in the literature [2]. However, once deployed on the device, the adaptation using TinyTTA does not have access to the source data, adhering to the standard TTA pipeline and the same configuration as in [2]. We will clarify this point in Section 3.1.
>Q4: What is the difference compared to [1]?
A4: We discussed [1] TinyEngine in Appendix C.4. Specifically, TinyEngine focuses on on-device training (with labeled data), whereas the TinyTTA Engine (ours) focuses on on-device test-time adaptation (with unlabeled data). TinyEngine statically pre-determines the layers and channels to be updated before deployment (i.e., as a binary file), executing these updates at runtime. In comparison, TinyTTA is dynamic during inference, routing samples to exits in submodules and enabling the exiting of high-entropy samples for reliable TTA. To enable TinyEngine for TTA, the only viable solution is using TENT [3] to finetune with entropy minimization. To this end, we compared TinyEngine (dubbed TE) using TENT on a Raspberry Pi Zero 2W, with batch size 1, against TinyTTA (ours) in terms of accuracy. All configurations are the same as in Appendices B and C. The results are below:
|**Method**|**CIFAR10C**|**CIFAR100C**|**OfficeHome**|**PACS**|
|----------|------------|-------------|--------------|--------|
|TE-MCUNet|0.13|0.06|0.07|0.06|
|**TinyTTA (Ours)-MCUNet**|**0.64**|**0.52**|**0.58**|**0.64**|
|TE-EfficientNet|0.19|0.11|0.09|0.07|
|**TinyTTA (Ours)-EfficientNet**|**0.68**|**0.53**|**0.62**|**0.66**|
|TE-MobileNet|0.18|0.05|0.05|0.06|
|**TinyTTA (Ours)-MobileNet**|**0.65**|**0.53**|**0.60**|**0.63**|
|TE-RegNet|0.15|0.12|0.07|0.08|
|**TinyTTA (Ours)-RegNet**|**0.64**|**0.51**|**0.54**|**0.57**|
We can make the following observations:
- Powered by the TinyTTA Engine, TinyTTA generally performs stably as it allows for dynamically exiting high-entropy samples.
- TE is unable to perform stable TTA across all datasets.
We will update the paper in Appendix C.4.
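As a generic illustration of entropy-gated early exiting (a minimal sketch under our own simplifications, not TinyTTA's exact routing rule), a sample can leave at the first exit whose prediction entropy falls below a threshold, while high-entropy samples continue to deeper submodules; `early_exit` and the threshold value are hypothetical names chosen for this example:

```python
# Minimal sketch of entropy-gated early exiting (illustrative, not TinyTTA code).
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    # Shannon entropy of a categorical distribution (natural log).
    return -sum(p * math.log(p) for p in probs if p > 0)

def early_exit(exit_logits, threshold=0.5):
    """exit_logits: one logit vector per exit, ordered shallow to deep.
    Returns (exit_index, probs) for the first confident (low-entropy) exit,
    falling back to the deepest exit otherwise."""
    for i, logits in enumerate(exit_logits):
        probs = softmax(logits)
        if entropy(probs) < threshold:
            return i, probs
    return len(exit_logits) - 1, probs
```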
>Q5: Which part of the model will be trained on device?
A5: After deployment, as shown in Fig. 2, only the exits will be updated on-device, while the remaining parts of the model are frozen, ensuring both high TTA accuracy and low memory usage.
>Q6: Why does activation memory change per layer, and no weight memory in Fig 3, compared to "fine-tuning per-layer memory usage"?
A6: The primary memory usage for activations is determined by the size of the last output tensor of each layer, essentially storing each layer’s outputs. Since each layer’s output shape is different, their activation memory will accordingly be different. The weight memory is very small (a few KBs) compared to the activation memory. The weight memory usage for modulating TTA, i.e., the change of two parameters, Scale (γ) and Shift (β), is relatively small and not visible in Fig 3.
Consider a single CNN layer with an input image size of 224x224x3 and a batch size of 1. The input activations alone amount to 1 (batch size) * 224 * 224 * 3 (channels) ≈ 150 thousand values, and the output activations after a convolutional layer with 64 filters amount to 224 * 224 * 64 (channels/filters) ≈ 3.2 million values. In contrast, the convolutional layer weights would only be 64 (output channels) * 3 (input channels) * 3 * 3 (kernel size) = 1,728 values. The sheer volume of activations, especially in layers producing large feature maps, thus leads to significantly higher memory consumption than the relatively small number of weights. We will clarify this in Appendix A.
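The arithmetic above can be sketched as a small value-count helper (illustrative only; the layer shapes are those of the hypothetical example, assuming stride 1 and 'same' padding so the spatial size is preserved):

```python
# Count activation vs. weight values for a single conv layer (illustration).

def conv_value_counts(h, w, c_in, c_out, k):
    """Return (input activations, output activations, weights) as value counts.
    Assumes batch size 1, stride 1, and 'same' padding."""
    input_acts = h * w * c_in        # 224*224*3  = 150,528
    output_acts = h * w * c_out      # 224*224*64 = 3,211,264
    weights = c_out * c_in * k * k   # 64*3*3*3   = 1,728
    return input_acts, output_acts, weights

in_a, out_a, wts = conv_value_counts(224, 224, 3, 64, 3)
# Activations outnumber weights by roughly three orders of magnitude.
```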
> Q7: The authors said "to entirely omit usage of normalization layers ...". Isn't WS normalization a normalization layer?
A7: We discussed how to deploy Weight Standardization (WS) normalization in lines 189-190 and Fig 4. Specifically, WS will be applied within the CNN exit layer (i.e., a new CNN layer which was introduced by TinyTTA Engine during deployment) to avoid using batch normalization layers. We will further clarify this in Section 3.3.
[1] On-Device Training Under 256KB Memory, NeurIPS 2022 \
[2] EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization, CVPR 2023 \
[3] Tent: Fully test-time adaptation by entropy minimization, ICLR 2021
---
Rebuttal 2:
Comment: Dear reviewer, thank you for reading our rebuttal! We believe that our response addresses your raised weaknesses (more justification and an ablation study) and questions (more experimental results). If you agree that our response addresses the weaknesses and questions, please consider raising your score. If you have any outstanding concerns, please let us know so that we can do our best to address them.
---
Rebuttal Comment 2.1:
Comment: Thank you for your thoughtful response. The authors have addressed my concerns, and I raised my score accordingly.
---
Reply to Comment 2.1.1:
Comment: Dear reviewer. Thank you so much again for your time and effort in thoroughly reviewing our work/rebuttal and your response!
We are glad that our response addressed your questions properly. In our final draft, we will update our paper based on your feedback and our rebuttal.
Sincerely,
The Authors | Rebuttal 1:
Rebuttal: Dear reviewers and meta reviewers,
We appreciate all the positive comments of our work:
- Reviewer yW6M: First time using WS and self-ensemble for on-device test time adaptation.
- Reviewer DHsU: Well-motivated, impressive results, and real device deployment on low-end IoT devices with only 512 KB of memory.
- Reviewer 2zcM: Comprehensive evaluation, well-structured and written, and novel TinyTTA Engine.
- Reviewer bEbZ: First TTA for small devices; TinyTTA Engine is impressive and well evaluated.
We have addressed all the comments by providing more clarifications and new results:
- Reviewer yW6M: We have clarified memory usage in initial layers, self-ensemble training, on-device updating components, activation memory, and WS normalization. New experiment: Comparison with on-device training.
- Reviewer DHsU: We have clarified fine-tuning bias only, normalization layers on MCUs, TinyTTA engine, and switching some models instead of TTA. New experiments: real-world different data modality. Comparison with "fine-tune bias only."
- Reviewer 2zcM: We have clarified implementation details and the motivation of entropy thresholding. New experiments: real-world different data modality, new non-TTA baseline, and ablation study of early exits.
- Reviewer bEbZ: We have clarified the principle to group activation, novelty of exits, citations, and activation memory. New experiment: memory of exits.
Detailed Q&As are listed below. We look forward to further discussions and feedback. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing | Accept (poster) | Summary: This paper proposes a new method for multi-view consistent inpainting. The input includes a video and a sequence of masks. The first image could be manipulated by any 2D editing approach, while the remaining images need to be inpainted using the proposed method. The method is built upon the SD1.5-inpainting model and fine-tuned with two LoRAs: one for object-centric and one for scene-level, using different datasets. It employs motion priors by fine-tuning the domain-adapted LoRA and temporal transformers from AnimateDiff, pre-trained on a video dataset. The technique of reference key & value concatenation from previous works is used to enhance consistency with the reference image. Flow features are extracted and added to the UNet as additional conditions. During inference, a heuristic strategy is proposed to warp and propagate the mask from the first frame to all subsequent frames.
Strengths: 1. The task of propagating a single 2D edited image to generate a consistent video is interesting.
2. The overall pipeline is well-motivated, built upon the SD1.5-inpainting base model, fine-tuning two LoRAs on different datasets, and including temporal modules, flow features, and reference attention techniques.
3. The proposed method doesn't require camera pose as input and has a short inference time.
4. The paper evaluates the method on several datasets with various tasks and also provides some ablation studies.
5. It's interesting to know that the dense feature is less effective than the slot embeddings.
Weaknesses: 1. While the title contains "to Bridge 2D and 3D Editing," the proposed techniques are primarily designed for 2D images or videos, with no explicit 3D representation used or 3D output generated. It would be interesting to explore whether 3D output can be extracted from the resulting image sequence. Otherwise, it might be better to change the title "3D editing" (and other mentions in the text) to something less misleading.
2. The utilization and fine-tuning of domain-adapted LoRA and temporal transformers from video models are intuitive and straightforward, but there do not appear to be any insightful design choices, limiting the novelty.
3. The technique of "Reference Key & Value Concatenation" has been used in many previous works. We fail to see much novelty here.
4. While the section introducing the techniques is generally clear, the concrete settings and details for training, inference, and evaluation are very hard to follow. I tried my best to understand the setup but still feel confused about many details, which is quite frustrating. The paper needs to be rewritten in a more well-organized way, introducing all necessary settings and details. Specifically, the following points need clarification:
a) During inference, you employ 2D models to generate reference images for different tasks (e.g., object removal/editing/replacement). However, what is the input-output pair during training? For example, do you have specific ad-hoc training pairs for the object removal task?
b) The experiment contains various setups, and the concrete settings are confusing. For each setup, it would be better to explicitly mention the task (e.g., inpainting with object/random masks, object generation, object removal, object replacement), dataset, the input, ground truth output, and how the **masks** are generated. Currently, it only says "Object-Centric Results," "Forward-Facing Results," and "Real-World 3D Scene Editing." But the concrete tasks are unclear. For example, what is the difference between "Object Removal" and "Scene-Level Inpainting"? I assume "object removal" is also achieved by inpainting? If the setup for quantitative and qualitative experiments is different, please also mention it. For example, there are some qualitative results for "object removal," but it should not be evaluated quantitatively (no ground truth)?
5. In equation (1), you concatenated both the masked latents and unmasked noised latents. Some related questions:
a) The 9-channel input differs from the original input of SD1.5-inpainting. How do you achieve that?
b) The noise-free latent is currently masked after VAE encoding. Will this cause leakage? It might be better to mask before VAE encoding.
6. What is the difference between the "Masking strategy" (line 163) and "masking adaptation" (line 228)? Is one for training and one for inference?
7. The method extracts flows before masking. This may be acceptable for the tasks of removal and replacement, but will it cause leakage for the task of general NVS and inpainting, considering the flow model may have seen the ground truth image?
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Have you considered directly fine-tuning a video inpainting model instead of using SD1.5-inpainting?
2. Line 186: The sentence "which always captures the unmasked reference latent without noise (Eq. 1), thus it is unnecessary to re-scale the latent before adding noise from another U-Net" is unclear to me. Could you please clarify?
3. Line 207: The sentence "except that the former should be normalized in the query dimension, while the latter is normalized in the key dimension" is unclear. Could you elaborate?
4. Line 212: The phrase "with comparable performance and fewer trainable weights" needs clarification. Which two setups are being compared?
5. Could you provide more details about the 3D attention mechanism?
6. Line 309: In the context of not having object-level tracking masks, how do you obtain the mask?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes, discussed in supplementary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We rearrange and classify similar questions together.
**W1. No 3D representation? Tuning down "3D editing" in title.**
Thanks.
First, we would like to clarify that our paper has included a detailed discussion on the explicit 3D representation (3DGS) within our method in Section C of Appendix (Lines 727-762). Additionally, we have provided videos rendered using 3DGS in supplementary.
Second, the primary contribution of our work is to 'bridge' the gap between 2D and 3D editing, rather than focusing exclusively on either 2D or 3D synthesis. This bridging is fundamentally about achieving multi-view consistent inpainting (Lines 45-46), which MVInpainter effectively addresses.
Benefiting from the consistent inpainting, our method enjoys reliable 3D representation (Lines 298-299).
Since the key contribution of our work lies in multi-view inpainting, the discussion of 3D representation holds a lower priority; we therefore provide it in the Appendix.
We would further clarify this point in revision.
**W2, W3. Limited novelty of LoRA, temporal transformers, and Ref-KV?**
Thanks. While we understand your perspective, we respectfully disagree.
The primary novelty of our paper lies not in the specific model design, but in the innovative task formulation that synergistically leverages video priors, optical flow, and the inpainting formulation to simplify the complex novel view synthesis (NVS) into a multi-view inpainting task (Lines 44-46).
We believe that the novel model design is not the only way to advance AIGC development. An effective new formulation can significantly reduce task complexity (Lines 122-123), thereby also providing significant novelty and insights to the community.
Besides, except for video inpainting modules, we also propose other novel and effective components, including flow grouping and mask adaption.
**Q2. Clarifying for Line 186: "which always captures unmasked ... noise from another U-Net".**
Thanks. Line 186 details the differences between our usage of Ref-KV and previous works [Hu L et al,CVPR2024;Ruoxi Shi et al,2023]. We apologize for the incorrect citation here; AnyDoor[Chen, Xi, et al,CVPR2024] should be AnimateAnyone[Hu L et al,CVPR2024].
Formally, AnimateAnyone used another diffusion U-Net to encode clear (noise-free) reference views, while Zero123++ re-scaled the latent inputs by multiplying the reference latent by a constant (5x) to strengthen the effect of the reference view.
In our work, no additional modules or adjustments are necessary to encode the reference latent. This is because the first reference frame in our inpainting model is completely mask-free, providing straightforward guidance when concatenated as the unmasked input (Lines 185-187).
We will provide further clarification on these points and revise the citation.
**W4. a) Setting details of tasks, datasets, and masks.**
Thanks. We have introduced the inference pipeline in Section 3.4 and Figure 3(a) of the main paper. The overall tasks of this paper are clear, i.e., multi-view object removal (MVInpainter-F) and object insertion (NVS by MVInpainter-O), while the replacement is just the combination of these two tasks as in Figure 3(a).
We introduced all input-output pairs for NVS and object removal in Lines 142-156, Figure 2(a) of the paper, which is further detailed in Table 4 of Appendix. NVS is trained on object-centric data (MVInpainter-O), while object removal is trained on face-forwarding data (MVInpainter-F). Masks are discussed in Lines 163-169.
**Ad-hoc pairs for object removal?**
No ad-hoc training pairs are needed for object removal.
Based on the common conclusion of previous inpainting works, training on the scene-level data with random masks is sufficient to learn the object removal ability.
We will further improve the task definition and data setting.
**Q6. How to obtain training masks without object masks?**
Without the object-level tracking masks, we only use hybrid random inpainting masks (Lines 163-164) to train MVInpainter-O.
Note that we have discussed that both MVInpainter-O and MVInpainter-F adopt random inpainting masks, while we additionally employ object masks for MVInpainter-O (Lines 164-165).
**b) Various setups of experiments, Difference between Object Removal and Scene-Level Inpainting.**
Thanks. All sections are used to evaluate MVInpainter-O and MVInpainter-F separately.
Formally, Section 4.1 (Object-Centric Results) is dedicated to evaluating the NVS performance of MVInpainter-O trained on object-centric data; Section 4.2 (Forward-Facing Results) is dedicated to evaluating the performance of MVInpainter-F trained on forward-facing data; Section 4.3 (Real-World 3D Scene Editing) shows the result of the combination of object removal (MVInpainter-F), NVS (MVInpainter-O) and mask adaption (Lines 296-298).
We further divide Section 4.2 into 'Object Removal' and 'Scene-Level Inpainting' to evaluate the abilities to remove objects with object masks (Line 279) and inpaint images with random masks (Line 291). So 'Object Removal' and 'Scene-Level Inpainting' are both tested with MVInpainter-F but with different mask types and test sets.
We also detailed the settings of datasets (Lines 246-254, Lines 675-687) and metrics (Lines 263-267) in the paper.
**c) Setup for quantitative and qualitative experiments, especially for object removal.**
Thanks. All quantitative and qualitative experiments are conducted on the same dataset except for object removal (Lines 279-280), as discussed in the paper.
Quantitative results of object removal are based on the **test set** of SPIn-NeRF without foregrounds, which can be regarded as GT (Line 279), while qualitative results of object removal are evaluated on the **train set** of SPIn-NeRF with foregrounds (no GT, Line 280). More details are discussed in Lines 681-684 of the Appendix.
**Limited by the rebuttal context, other concerns from reviewer NjjA are answered in the global rebuttal (at the top of this page).**
---
Rebuttal Comment 1.1:
Title: Feel free to discuss any remaining questions or concerns
Comment: Thanks for your valuable feedback. We have carefully addressed all your concerns and provided the details in our rebuttal. Our models and codes will be open-released, including all details to reproduce our results. As the rebuttal deadline approaches, please feel free to discuss any remaining questions or concerns. We will try our best to answer your questions. | Summary: The paper formulates the 3D object editing task as a multi-view 2D in-painting task. Firstly, MVInpainter-F is employed to remove the object and obtain the background scene. Then, MVInpainter-O generates multi-view images based on the reference view.
Strengths: 1. Solving the 3D object editing task as multi-view generation is interesting and innovative.
2. The thoughtful selection of different training datasets (indoor scene/object-centric) for training MVInpainter-F and MVInpainter-O effectively decomposes the object replacement task.
3. The extensive experiments, including the generation of long sequences and the 3D scene reconstruction using the generated multi-view images, further demonstrate the 3D consistency of the resulting outputs.
Weaknesses: It is uncertain whether the design of mask adaption is robust enough.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. During training, the flow is obtained using RAFT and masked. In the inference stage of MVInpainter-F, is the flow only obtained from the masked image? Does the utilization of this low-quality flow negatively impact the results?
2. How can we ensure that mask adaption remains robust when dealing with different masks and backgrounds?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author provides a detailed discussion of limitations and broader impacts in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1, Q2. Whether mask adaption is enough robust? How to deal with different masks and backgrounds?**
Thanks for this good point. As long as the 'basic plane' on which the object is placed can be approximated as a plane, the proposed mask adaption is sufficiently robust, grounded in the theory of perspective transformation. We have shown the effectiveness of the mask adaption with various object shapes in Figures 12 and 13 of the Appendix.
To further clarify the robustness, we provide more examples with complicated backgrounds in the wild shown in Figure 4 (rebuttal pdf), including textureless table, rough lawn, and pool with sunlight reflection.
We find that the dense matching (RoMa[1]) used in our work enjoys good generalization on various backgrounds. Importantly, most backgrounds can be approximated to the 'basic plane' as claimed in Lines 231-233.
We would add these results to the paper.
[1] Johan Edstedt, et al. RoMa: Robust Dense Feature Matching. CVPR2024.
**Q1. Do low-quality flows obtained from masked images negatively impact the result?**
Thanks. We have discussed this point in the Line 194 (footnote of page 5).
We extract flows before masking because foregrounds largely benefit the flow quality. Note that we process flows with 5-pixel dilated masks to avoid any leakage.
Given that optical flows are typically low-level local features, we have not observed any conflicts when using masked flows as guidance. Furthermore, the quantitative results of dense flow presented in Table 3(b) verify that no leakage occurs (dense flows did not improve the NVS results), ensuring the integrity of quantitative results.
We would clarify about the detailed operation of flow masking (mask dilation) in the revision. | Summary: This paper proposes a new 3D editing method by regarding it as a multi-view 2D inpainting task. It ensures cross-view consistency through video priors and concatenated reference key/value attention and controls camera movement without explicit poses using slot attention. It shows effectiveness in object removal, insertion, and replacement.
Strengths: - The slot-attention-based flow grouping has a novel and reasonable design. As demonstrated in various video/multi-view models, self-attention-based key and value concatenation is effective in ensuring view consistency. Overall, this paper proposes a well-structured recipe for multi-view editing.
- The strategies used in the proposed method and their justifications are well explained in detail.
- The video results and quantitative results demonstrate good performance.
Weaknesses: - Some assumptions are heuristic, such as in L232. While these assumptions may be reasonable in most cases, they might not be robust across diverse user cases.
- Overall, the figures and flow of the paper are complicated. It would be beneficial if the content of the figures were made more comprehensive.
Technical Quality: 2
Clarity: 3
Questions for Authors: How is the generalizability? What is the performance in completely OOD cases?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: the authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1. Heuristic assumption of mask adaption?**
Thanks for this point. We should clarify that the planar assumption of mask adaption in Line 232 is reasonable and straightforward to implement across diverse real-world cases.
First, this strategy roughly decides the mask location rather than providing exact mask shapes; the located masks are further irregularly perturbed for better generalization (Lines 241-242).
Second, the mask adaptation proves to be robust for the 'bottom face' of targets with various shapes, as verified in Figures 12 and 13 of the Appendix. Almost all objects have explicit or approximate 'bottom faces' which makes this assumption widely applicable.
For the 'basic plane', the key enabler is the dense matching method.
We find that the dense matching method (RoMa [1]) used in our work generalizes to various in-the-wild backgrounds (textureless tables, rough lawns, and pools with sunlight reflection), as verified in Figure 4 (rebuttal pdf), demonstrating the robustness of mask adaption.
[1] Johan Edstedt, et al. RoMa: Robust Dense Feature Matching. CVPR2024.
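As a rough sketch of how a planar mask can be propagated between views under the perspective-transformation assumption (hypothetical code, not MVInpainter's actual mask adaption; in practice the homography would be estimated from RoMa's dense matches on the basic plane):

```python
# Warp mask corner points from a reference view to a target view with a
# homography H of the planar "basic plane" (illustration only).
import numpy as np

def apply_homography(H, pts):
    """pts: (N, 2) pixel coordinates; returns their projections under H."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    proj = homog @ H.T
    return proj[:, :2] / proj[:, 2:3]                  # back to pixel coords

# A pure-translation homography shifts the mask's bounding box rigidly.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 30.0], [0.0, 30.0]])
warped = apply_homography(H, corners)  # each corner shifted by (10, 5)
```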
**W2. Complicated figures and flow? Needing comprehensive content of figures.**
Thanks for this good advice!
Indeed, 3D editing is an inherently non-trivial task involving sophisticated pipelines, as seen in many pioneering works.
In this paper, we simplify the overall pipeline into the following steps:
(1) 2D inpainting → (2) multi-view removal (optional) → (3) multi-view synthesis/insertion → (4) 3D reconstruction (optional).
These steps are illustrated in Figure 3 and detailed in the inference pipeline in Section 3.4.
Our work focuses on steps (2) and (3), which are addressed by MVInpainter-F and MVInpainter-O respectively (Figure 2a).
We agree with the reviewer that providing a more comprehensive overview figure between Figures 2 and 3 could make readers understand our paper better. We design a new overview shown in Figure 5 (rebuttal pdf), which clearly indicates the steps that are the main focus of our paper and identify the models specifically designed for each step.
We will re-organize these points, if accepted.
**Q1. Generalizability for OOD cases.**
Thanks. Benefiting from the T2I prior from StableDiffusion, our method enjoys the capacity to tackle OOD cases, which have been verified in our submission.
Formally, 15 categories of MVImgNet are unseen test sets (Line 678). We further evaluate the OOD capability of MVInpainter in the zero-shot Omni3D (Table 1, Figures 4 and 9 of the paper).
We show some qualitative NVS results of OOD categories in Figure 3 (rebuttal pdf).
Moreover, our method could also be used for unseen toys generated from exemplar-based synthesis[2] as verified in Figure 10.
Considering the difficulty of scene-level NVS, it is challenging to properly address some completely OOD cases, such as human bodies, which are limited by the capacity of SD and AnimateDiff, as well as the restricted in-the-wild multi-view datasets (CO3Dv2+MVImgNet).
Despite these limitations, MVInpainter is, to the best of our knowledge, the most robust reference image-guided method that generalizes well to in-the-wild object NVS across various categories (Lines 253-254).
We would discuss this in the revision, and consider scaling up our model with more diverse training data to develop a foundational model as interesting future work.
[2] Xi Chen, et al. Anydoor: Zero-shot object-level image customization. CVPR2024. | Summary: The paper studies the task of multi-view consistent real-world object removal and insertion, enabled by learning a model, MVInpainter, trained to perform multi-view 2D inpainting. The paper demonstrates its effectiveness on both object-centric and scene-level datasets with the task of object removal and object insertion.
Strengths: The task that the paper addresses is important yet less explored. Learning strong priors for scene level multi-view consistent editing can serve as a basis model for various 3D applications.
The method introduces motion priors and optical flow to guide the generation, which seems to be interesting and effective with various quantitative and qualitative evaluations.
The presentation is good and the paper is overall easy to read.
Weaknesses: 1.Though the paper tackles the setting of video input (consecutive frames to obtain the motion prior), I am curious how the method performs with just multi-view inputs? Does the performance significantly drop? If the model can be applied to those as well, it could potentially lead to bigger impact and be applied to more real-world scenarios.
2.How are the priors different from AnimateDiff and Flow Grouping? Why are they complementary?
3.Could the authors please explain why the proposed method cannot be compared to SPIn-NeRF-type methods? Is it because of camera poses? If so, I am still curious to see how large the gap is? If the method is close to the methods with camera poses, it would make the paper stronger.
4.One limitation is that the method still works on images with a relative simple background, as pictured in the supplementary materials Figure 19.
Technical Quality: 3
Clarity: 3
Questions for Authors: I think the paper is overall of good quality and tackles an important task. Some questions (as listed) could be explained for a better understanding of the method.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1. How does the method perform with unordered multi-view inputs?**
Thanks. Although our work mainly focuses on sequential multi-view inputs to better leverage the prior from the video components, it can also achieve comparable performance with unordered inputs, as preliminarily verified in Tables 1 and 2 (rebuttal pdf).
Unordered inputs still enjoy competitive FID, DINO, and CLIP scores for object-centric novel view synthesis (NVS) and object removal, indicating good image quality and consistency. The proposed flow grouping and Ref-KV also strengthen the generated direction and appearance for unordered inputs, preserving stable PSNR. However, unordered NVS suffers from some structural distortions caused by irregular viewpoint changes as shown in Figure 1 of the rebuttal pdf. Thus, we would still clarify that using ordered images with video prior is a more efficient way to model the multi-view inpainting (Line 171-173). Maybe further fine-tuning our model with unordered images could alleviate this issue, which is beyond the scope of this paper.
**Table1: Unordered generative results of object removal**
| | PSNR ↑ | LPIPS ↓ | FID ↓ | DINO-S ↑ | DINO-L ↑ |
|-----------------------|--------|---------|-------|----------|----------|
| Ours (ordered) | **28.87** | **0.036** | **7.66** | **0.8972** | **0.5937** |
| Ours (unordered) | 28.57 | 0.039 | 9.76 | 0.8943 | 0.5752 |
**Table2: Unordered generative results of object-centric NVS**
| | PSNR ↑ | LPIPS ↓ | FID ↓ | CLIP ↑ |
|-----------------------|--------|---------|-------|----------|
| Ours (ordered) | **20.25** | **0.185** | 17.56 | **0.8182** |
| Ours (unordered) | 19.29 | 0.233 | **17.37** | 0.8173 |
**W2. How are priors different from AnimateDiff and Flow Grouping? Why are they complementary?**
Thanks. The priors from AnimateDiff and flow grouping serve different but complementary purposes in our method. The video prior from AnimateDiff enhances structure consistency in the generated outputs (Line 307-308 and Figure 6 of the paper). This helps maintain a coherent structure across frames. Flow Grouping improves pose controllability (Line 191-193 and shown in Figure 11 of the Appendix). This component ensures that the synthesized poses are accurate and aligned with the intended viewpoint changes from unmasked regions. For the NVS inpainting, objects are consistently masked, leading to potential ambiguities in some scenes. These ambiguities can cause errors in pose generation, but video models alone struggle to address them (Figure 11). Quantitative results further verify this point (Table 3).
**W3. Why not compare to SPIn-NeRF-type methods? Because of the requirement of camera?**
Thanks for this good point. Our contributions are orthogonal to NeRF-editing-based approaches (SPIn-NeRF [1]).
MVInpainter tackles multi-view editing with a feed-forward model, while NeRF editing is devoted to reconstructing instance-level scenes with test-time optimization. Besides requiring exact camera poses, NeRF-editing approaches need costly test-time optimization (Lines 109-112) for each instance (SPIn-NeRF takes about 1 hour per scene).
Moreover, NeRF-editing approaches cannot substitute for our method, as evaluated in Figure 2 and Table 3 (rebuttal pdf).
a) NeRF editing starts with inconsistent 2D-inpainting results, which leads to blurred results as shown in Figure 2 (rebuttal pdf), while our method could refer to a high-quality single-view reference without conflicts.
b) Although both methods enjoy good consistency, rendering-based inpainting suffers from color difference when blended with the original images (the last row of Figure 2 (rebuttal pdf)).
As shown in Table 3 (rebuttal pdf), our method is comparable to SPIn-NeRF in consistency (DINO-S, DINO-L) with better image quality (PSNR, LPIPS, FID) and fidelity in their official object removal test set.
Note that our method is orthogonal to instance-level manners and can further boost their performance with much better inpainting initialization.
For example, experiments in Section C of the Appendix show that our method could be easily integrated with 3DGS to achieve consistent 3D outputs.
We will add these discussions to the paper if accepted.
**Table3: Object removal compared to SPIn-NeRF**
| | PSNR ↑ | LPIPS ↓ | FID ↓ | DINO-S ↑ | DINO-L ↑ |
|-------------|-----------|----------|----------|-------------|-------------|
| Ours | **28.87** | **0.036**| **7.66** | **0.8972** | 0.5937 |
| SPInNeRF| 25.82 | 0.084 | 38.13 | 0.8681 | **0.6350** |
[1] Ashkan Mirzaei, et al. Spin-nerf: Multiview segmentation and perceptual inpainting with neural radiance fields, CVPR2023.
**W4. Limitation of complex background.**
Thanks. It is important to note that all methods have certain constraints. Our approach is currently limited by the capabilities of StableDiffusion and AnimateDiff, making it challenging to inpaint 360° complex backgrounds that are out of reference view.
However, our method is effective enough for most multi-view scene editing. We acknowledge that scaling up our model to a foundational video inpainting model would be interesting future work. | Rebuttal 1:
Rebuttal: We appreciate the valuable comments from all reviewers. We thank the reviewers for the positive comments: 'interesting and effective' and 'novel and reasonable design' for the slot-attention-based flow grouping (jssh, VJEq); 'important yet less explored' and 'interesting and innovative' for multi-view generation (n43z, NjjA). We address the remaining concerns of all reviewers below. While our experiments are already extensive, we provide supplementary results in the rebuttal PDF to make our claims more convincing. Please refer to the rebuttal PDF for more details and results.
Here we further address remaining concerns from reviewer NjjA.
**W5. Questions about inpainting form (Eq. 1).**
**a) 9-channel input differs from SD1.5-inpainting?**
Thanks. Our model is based on the SD1.5-inpainting (Lines 124-125), which originally contains 9-ch input, including noised latent (4-ch), mask (1-ch), and masked latent (4-ch).
**b) Why is noise-free latent masked after VAE encoding?**
Thanks for careful reading, and sorry for the confusing presentation of Eq. (1). The masking is done before VAE encoding; in Eq. (1) we only intend to denote the masked latent. We will revise Eq. (1) in the revision.
**W6. Difference between 'masking strategy' (line 163) and 'masking adaption' (line 228)?**
Thanks. Yes, the masking strategy in Line 163 is introduced for training, while the masking adaptation in Line 228 is introduced for inference with good generalization as verified in Figures 12, 13, and Figure 4 (rebuttal pdf).
**W7. Leakage of flows in NVS?**
Thanks for this good point.
We clarify that the flow used in our work does not leak information for NVS and inpainting:
1) As noted in the footnote on page 5, our method extracts flows from unmasked images and then applies masks to them. To further prevent leakage, we dilate flow masks by 5 pixels.
Given that optical flows are typically low-level local features, it is challenging for them to carry masked clues to unmasked regions. No conflicts are observed when we use masked flows as guidance for removal and insertion.
2) The proposed method leverages flow grouping with slot-attention to extract high-level motion from flow features (Line 200). These features primarily capture rough pose directions in unmasked regions rather than detailed information, making it difficult to leak masked info.
3) Importantly, quantitative results of dense flow injection in Table 3b indicate that simply adding flow features does not improve NVS quality. If there were any leakage in these masked flows, NVS results would improve markedly, which is not observed.
We will include detailed explanations of flow masking operations, including mask dilation in the revision.
**Q1. Fine-tuning a video inpainting model instead of SD1.5-inpainting?**
Thanks. We agree that fine-tuning a foundational video inpainting model is a promising direction to strengthen our work, which could be regarded as future work.
Unfortunately, to the best of our knowledge, there are currently no open-released video inpainting models with sufficient capacity to address the NVS task mentioned in this paper. Existing video inpainting models fail to tackle our tasks (Lines 82-83).
On the other hand, fine-tuning a foundational video model like SVD into a video inpainting model requires substantial computational resources and data, which is beyond the scope of this paper.
However, our approach, which unifies 2D-inpainting and AnimateDiff, proves both efficient and effective with good convergence (Lines 175-176).
Moreover, we believe that once a foundational video inpainting model becomes publicly available, MVInpainter can be seamlessly integrated into this new model, potentially yielding better performance.
**Q2 is answered in the previous rebuttal page.**
**Q3. Why slot-attention is normalized in query dimension?**
Thanks. Line 207 clarifies the difference between slot attention[1] and vanilla cross-attention.
Given $K$ query (slot) features $\mathbf{Q}\in\mathbb{R}^{K\times d}$ and key features $\mathbf{K}\in\mathbb{R}^{HW\times d}$ with different length $K$ and $HW$, we can achieve the attention matrix: $\mathbf{Q}\mathbf{K}^T\in\mathbb{R}^{K\times HW}$.
Following the official implementation of slot attention [1], softmax normalization is applied to the attention matrix along the $K$ slot dimension. This makes the slots compete for input features: for each key position, the attention coefficients across the $K$ slots sum to one. This is why we mentioned that slot attention (called the 'former' in Line 207) normalizes in the query dimension.
In contrast, the vanilla cross-attention (called 'latter' of Line 207) normalizes along the $HW$ key dimension.
According to [1], such normalization in the slot dimension improves the stability. We recommend referring to [1] for more details.
We will clarify this point in the revision.
[1] Object-centric learning with slot attention. NeurIPS2020.
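To make the normalization difference concrete, here is a minimal NumPy sketch (random features and a scaled dot-product, purely illustrative of the axis choice rather than our actual implementation):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
K, HW, d = 4, 16, 8                 # slots, spatial positions, feature dim
Q = rng.standard_normal((K, d))     # slot (query) features
Kf = rng.standard_normal((HW, d))   # key features

attn_logits = Q @ Kf.T / np.sqrt(d)          # attention matrix, (K, HW)

# Slot attention: softmax over the K slot (query) dimension, so the
# coefficients across slots sum to one for each key position.
slot_attn = softmax(attn_logits, axis=0)

# Vanilla cross-attention: softmax over the HW key dimension.
cross_attn = softmax(attn_logits, axis=1)

assert np.allclose(slot_attn.sum(axis=0), 1.0)
assert np.allclose(cross_attn.sum(axis=1), 1.0)
```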
**Q4. Line 212: Which two setups are compared?**
Thanks. We have compared different ways to inject flow grouping into our model in Table 3b.
The variant 'cross-attn' (Line 212) performs similarly to 'time-emb' (injecting features through a trainable AdaNorm before conv and attn), while the latter needs to train more weights for the AdaNorm layers. We will revise it to 'ada-norm-emb' for a clearer presentation.
**Q5. More details about 3D attention mechanism.**
Thanks. The 3D attention mechanism refers to 'temporal 3D attention for the flow grouping' in Line 213.
In Figure 2(c), each flow grouping block contains a spatial-attention layer and a temporal-attention layer. This design is similar to the interleaved spatial-temporal attention layers used in video models. Since masked flow features contain rich temporal information, it is intuitive to extend slot-attention to 3D (spatial-temporal) receptive fields, making slot features carry more general information across all sequential views (Lines 213-215). Quantitative results (Table 3b) also verify this. We will further clarify this.
**Q6 is answered in the previous rebuttal page.**
Pdf: /pdf/0bacf8064d31a6e639c787f40ef89dca5f4d18e7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MMSite: A Multi-modal Framework for the Identification of Active Sites in Proteins | Accept (poster) | Summary: The manuscript proposes MMSite, a multimodal framework combining amino acid sequences and textual descriptions to predict active sites of proteins at the residue level. For that, the authors train an attention-based architecture in two stages. In the first stage, the objective is to align sequence and text embeddings; in the second stage, the model fuses both sequence and text embeddings to predict the final active sites. To train the model, the authors curated a dataset named ProTAD with sequence, text, and active site data. Results show that the proposed MMSite framework improves the performance for active site prediction over sequence-based and sequence-structure-based baselines.
Strengths: The main strengths of the paper are as follows:
1. The authors curated a dataset named ProTAD, in which protein sequences, their respective text annotations, and residue-level active site annotations are made available.
2. The authors introduce a two-stage multimodal framework that aligns sequence and textual embeddings, and then fuse these embeddings for active site predictions.
3. Results obtained by authors show that even using a predicted textual context at inference time, the proposed method can enhance the performance for active site prediction over baseline methods with prediction heads trained for the same task. Extensive ablation studies were performed for the proposed methodology.
Weaknesses: The main weaknesses of the paper are as follows:
1. As text descriptions are not available during the inference stage for novel proteins, the proposed methodology relies on an external model to generate the textual context for inference.
2. Information in the manuscript seems to lack sufficient details to replicate the implementation of the proposed architecture in MMSite and training the baseline methods for the active site prediction task.
3. The authors show that their method achieves the best results when splitting the datasets using an identity threshold equal to 10%. When a higher threshold is used, the gap between the proposed method and baseline methods is smaller. The paper would be stronger with deeper reasoning and analysis regarding this gap in performance.
Technical Quality: 2
Clarity: 2
Questions for Authors: My main questions/comments are as follows:
1. (Related Work Section) Methods that learn embeddings from sequence and structure inputs are also multimodal. These should also be properly reviewed in the related works section. It should probably also be highlighted that in the proposed work the multimodality is sequence and text, and the proposed method should accordingly be compared with ProtST, the other sequence-text model used for comparison.
2. (Baseline Comparison) ProtST is also a multimodal framework, but its results are much lower when compared to MMSite. Any reasoning for the large difference in performance?
3. (Lines 129-130, Line 149) I would suggest explaining already in the Introduction that during inference only the protein sequence is the input, and also that the proposed framework uses the embeddings from pre-trained PLMs and BLMs as part of its methodology.
4. (CLS token) I would suggest adding a reference on why only the CLS token is used for part of the attributes in the MACross module.
5. (Neural Network Architecture) It is very hard to understand the dimensions after each module of the architecture, which makes it hard to understand the intuition behind modules such as the MACross and the inter-modality attention. Because of this lack of sufficient information for the reader, it is hard to understand sentences such as the one in Line 179: "Inspired by [19], we adopt cross-modal soft-label alignment to make one-hot label continuous.".
6. (Neural Network Architecture) In Fig. 2 during the fusing step, there is a skip concatenate and prediction module that also takes as input the embeddings from the sequence-based feature extractor. How does this skip, concatenate, and prediction module work? The reviewer is not able to understand this part with the explanation in lines 189-201.
7. (Table 1) For the results in Table 1, how the baseline methods were trained. It is mentioned that a prediction head using a Transformer architecture is used. Is the embeddings for them extracted and then a prediction head trained using the proposed dataset? For the standard deviation how many times each method was trained with different random seeds?
8. (Reproducibility) Is code available in an anonymized repository for reviewers or will be available later upon acceptance?
9. (Prompt Ablation) Is there a prompt ablation or some evidence that the manual prompting is necessary to input the text descriptions to the BLM?
10. (Lines 176-180) It is not clear how the use of the KL divergence of the cosine similarities solves the problem of the possibility of "similar protein sequences giving rise to similar structures and functions" in a given batch. Can you give additional intuition regarding the modeling of this loss function for the alignment phase?
11. (Dataset splits) It is observed in the results that even for clustering thresholds as low as 10% the proposed method generalizes better than other methods, while for higher thresholds performance is similar to ESM. It seems that the text description conditions the model to learn the right site for a given function, acting as an implicit condition in this case based on the performance of the pre-trained BLM. It seems as a weakness for the design of de novo methods when compared to surface methods like MaSIF or sequence-structure methods like ScanNet and PeSTO. How would the proposed method be compared to these methods in terms of binding site interface prediction and generalization to de novo proteins?
Minor Comments:
1. Acronyms: PLMs need to be defined in line 32 of the manuscript.
2. Writing: There are issues in writing. Typos occur, e.g. line 133 "serveral", line 223 "will be removed", line 105 "Sites". There are expressions and words that would be better rewritten in a descriptive manner, e.g. "obvious", "near perfection", "it is clear", "after many attempts". The sentence in lines 114-115 should be rewritten.
3. Notations: The notation for sequence in line 124 could follow the same pattern as the attribute in line 126 and the annotation target in line 127.
4. Figures: Figure 1 does not seem to be mentioned in the text. The blue arrows for inference in Figure 2 are confusing and it is hard to grasp the dimensions obtained after each block. The labels in Figure 3 seem to be confusing and Figure 8 lacks proper captions.
5. References/Format: references should follow the right format, both in text and in the reference section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors address potential negative societal impacts and briefly explore the limitations of the proposed methodology in the Appendix. The paper would benefit by providing the source code for reviewers to evaluate the reproducibility of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your valuable feedback and constructive suggestions. Below is our point-by-point response to your comments.
**Re - Weaknesses 1**
It's true that MMSite relies on an external generative model to produce textual context for inference. This decouples the training of protein-to-text models from that of the active site prediction model. Training MMSite on top of pretrained BLM and PLM models is very energy-efficient, so we could easily obtain a better active site prediction model once a better protein-to-text model is available.
**Re - Weaknesses 2**
We apologize for the confusion.
In Section 4.1, we have described the implementation of each component of MMSite, and covered the training strategy for baseline methods in Lines 236-238.
**Re - Weaknesses 3**
When the split strategy is harsh, the baseline methods cannot generalize, whereas MMSite can find meaningful patterns in both sequence and text and achieve good performance. We think their performances are closer at the 50% threshold because (1) the dataset is easy enough for the baseline models to generalize, and (2) the performance is already very high, leaving little room for improvement. We will add this discussion to the revised paper.
**Re - Questions 1**
Thank you for your valuable suggestions. We will include a review of sequence-structure multimodal methods in the Related Work section, and emphasize the modalities of MMSite and compare it with similar sequence-text multimodal methods.
**Re - Questions 2**
While ProtST is a protein-text multimodal model, it aligns global sequence-level representations with textual descriptions, overlooking individual amino acid features, which reduces their effectiveness in residue-level tasks. In contrast, MMSite integrates text-empowered sequence representations directly at the token level, preserving the original features of amino acids. This ensures the model can benefit from the information provided by the text descriptions.
**Re - Questions 3**
In Lines 61-62, we have covered the inference strategy, but did not emphasize the source of embeddings, which may cause confusion. We will clarify both points in the future version. Thank you for your suggestion.
**Re - Questions 4**
The [CLS] token is used to aggregate the sequence-level representation of the entire input sentence, as first proposed in BERT. We will add this reference in the future version.
**Re - Questions 5**
We apologize for the confusion. Due to the character limit, we have provided detailed explanations about the dimensions in the **Author Rebuttal**. For Line 179, please refer to our response in the Re - Question 10 section.
**Re - Questions 6**
During the fusing step, we concatenate the text-enhanced sequence features with the original sequence features along the feature dimension. This combined feature is then fed into the prediction module to produce the final predictions. Detailed dimensions are provided in the **Author Rebuttal**. Additionally, for newly discovered proteins that lack text descriptions, we use an agent model to generate the necessary texts, which are then used alongside the sequences.
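As a minimal NumPy sketch of this fusing step (dimensions are illustrative, and a single linear layer stands in for the actual prediction module):

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 128, 256                               # residues, feature dim (illustrative)
seq_feat = rng.standard_normal((L, d))        # original per-residue sequence features
text_enhanced = rng.standard_normal((L, d))   # features after cross-modal fusion

# Skip concatenation along the feature dimension, then a prediction
# head with a sigmoid yielding per-residue active-site probabilities.
fused = np.concatenate([text_enhanced, seq_feat], axis=-1)   # (L, 2d)
W = 0.01 * rng.standard_normal((2 * d, 1))                   # stand-in head
probs = 1.0 / (1.0 + np.exp(-(fused @ W)))                   # (L, 1), in (0, 1)
```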
**Re - Questions 7**
Yes, we first extract amino acid-level embeddings from the baseline models, and then use a Transformer, followed by an MLP and a Sigmoid function to obtain active site probabilities. For the standard deviation, we trained each method 5 times with different random seeds.
**Re - Questions 8**
We have already included well-organized code, dataset and detailed instructions for replication in a zip file as **Supplementary Material**.
**Re - Questions 9**
We have tested the effect of removing manual prompting. The comparison is shown below:
|Method|F_max|AUPRC|MCC|OS|FPR|
|:-:|:-:|:-:|:-:|:-:|:-:|
|w manual prompting|**0.8250**|0.8909|**0.8319**|**0.8549**|**0.1689**|
|w/o manual prompting|0.8157|**0.8911**|0.8221|0.8460|0.1793|
The results show that manual prompting improves performance on most metrics. We believe this is because: (1) complete sentences provide richer context for the BERT-based BLM, (2) it reduces ambiguity in attribute meanings, and (3) it aligns better with BLM's pretraining, leveraging its knowledge more effectively.
**Re - Questions 10**
Traditional hard-label alignment assigns positive pairs a label of 1 and negative pairs a label of 0, pushing them apart in high-dimensional space. This isn't ideal for similar protein sequences $S_i$ and $S_j$, whose corresponding texts $T_i$ and $T_j$ are also similar. Thus, pushing $S_i$ away from $T_j$ may not be reasonable. To address this issue, we align the distribution $Q_i^{\rm s2t}$ towards $P_i^{\rm s2s}$ and $Q_i^{\rm t2s}$ towards $P_i^{\rm t2t}$ using KL divergence in Equation 8. This allows for a more nuanced alignment that respects the inherent similarities within the sequence and textual domains.
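The idea can be sketched in a few lines of NumPy (random unit embeddings and an assumed temperature; this illustrates the direction of the loss, not the exact implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
B, d, tau = 6, 32, 0.07                    # batch, dim, temperature (assumed)
S = l2norm(rng.standard_normal((B, d)))    # sequence embeddings
T = l2norm(rng.standard_normal((B, d)))    # text embeddings

Q_s2t = softmax(S @ T.T / tau)   # cross-modal seq-to-text distribution
P_s2s = softmax(S @ S.T / tau)   # intra-modal soft target: similar sequences
                                 # produce soft (not one-hot) labels

# KL(P || Q): pull Q_i^{s2t} toward P_i^{s2s} instead of a one-hot target,
# so similar (unpaired) sequence-text pairs are not pushed fully apart.
eps = 1e-9
loss = np.mean(np.sum(P_s2s * (np.log(P_s2s + eps) - np.log(Q_s2t + eps)), axis=1))
```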
**Re - Questions 11**
Surface-based and sequence-structure methods require structural information, which can be costly to obtain. In contrast, functional information may be more accessible for de novo proteins.
Empirically, MMSite performing better at a low clustering threshold demonstrates its generalization capability, but this does not imply it is ineffective for de novo proteins.
That said, we have not evaluated MMSite on de novo proteins, so we are not 100% sure.
**Re - Questions Minor Comments**
We will address all the issues you point out in the revised version to ensure they are correct and clear. Specifically:
- For Figure 2, please refer to our newly submitted PDF.
- For Figure 3, we use three colors to show the model's results: green for correctly predicted sites, blue for sites not predicted, and red for incorrectly predicted sites.
- For Figure 8, we will add more details to the caption: "Each subfigure caption is the protein's Entry ID in the UniProt database. The colors mean the same as in Figure 3."
Thank you again for your time and consideration.
Best regards,
Authors
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for your detailed responses and additional experiments.
I think my concerns and comments are mostly addressed.
As additional minor comments:
1. Writing comments addressed by other reviewers should be addressed.
2. I suggest modifying the captions in Fig. 3 and 8 to clarify the meaning of each color.
3. Regarding Question 11, I still feel that the dataset for pre-training the PLM and BLM has an influence on the generalization of MMSite. Is the performance of Prot2Text one of the bottlenecks for the application of this methodology to de novo proteins?
I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We sincerely appreciate your positive feedback and thoughtful suggestions.
We will make sure to address the writing issues and modify the captions to further improve the clarity and quality of our manuscript.
Regarding Question 11, we agree that the dataset used for pre-training the PLM and BLM can significantly influence the generalization of MMSite. The performance of Prot2Text indeed plays a crucial role in the application to de novo proteins. While Prot2Text is effective in generating textual descriptions, its performance could affect the accuracy of predictions for proteins without existing annotations. We will continue to explore ways to mitigate this and improve the robustness of our approach in such scenarios.
Thank you again for your constructive suggestions and for increasing your score.
Best regards,
Authors | Summary: This paper introduces MMSite, a multi-modal framework to improve the identification of active sites in proteins by leveraging protein sequences and textual descriptions. The authors build the ProTAD, a dataset containing over 570,000 pairs of protein sequences and detailed textual descriptions. The MMSite uses a “First Align, Then Fuse” strategy to align textual descriptions with protein sequences and fuses the modalities to enhance the identification of active sites. It also employs manual prompting, MACross module and soft-label alignment during the alignment phase. Based on this framework, MMSite outperforms existing protein representation learning methods on several metrics.
Strengths: This paper is well-written and clearly structured, making it easy to read and understand. The paper proposes an important task in the field of biological science. Faced with the scarcity of per-residue labeled data, the authors innovatively employ detailed textual descriptions to assist evolutionary scale models in the identification of active sites. The carefully designed framework addresses the challenges of integrating multiple data modalities through a “First Align, Then Fuse” strategy. Experimental validation results show the state-of-the-art performance of MMSite compared to other baselines, and demonstrate the robustness of the framework. In addition, the authors build a large dataset covering a wide range of protein attributes, which is helpful for subsequent protein related research.
Weaknesses: 1. The framework relies on a complex multi-modal integration mechanism, and in the reference stage, an agent model is used to generate text modality, which may lead to high computational costs.
2. In the last row of Tables 1, 8 and 9, MMSite also belongs to the Seq. & Text models, but it is separated from that cell. In Table 4, the way of indicating different training strategies by different colors is not mainstream.
3. Some specific details in this paper are not clearly explained, as listed in the Questions part.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Tables 1, 8 and 9, for ProtST (w/ and w/o retrain), which is also a protein-text multimodal model, why does it perform so much worse than MMSite?
2. In Section 3.2, you “adopt cross-modal soft-label alignment to make one-hot label continuous”, but in the reference [19] you cite, the original authors also adopted another uni-modal soft-label alignment, which is also useful to improve the performance in the case of soft-label. Have you tried this approach?
3. The ProTAD dataset contains over 570,000 pairs of protein sequences and textual descriptions, but why does your training set have less than 50,000 samples as shown in Table 6?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations mentioned in the main text and appendix. To reduce the potential negative societal impact of the misuse of the model, the authors have restricted the license in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your valuable feedback and constructive suggestions. Below is our point-by-point response to your comments.
**Re - Weaknesses 1**
Thanks for your concern regarding the inference efficiency of our model! During inference, if the protein has corresponding multi-attribute text descriptions, the process is the same as during training. However, if such descriptions are not available, we use the Prot2Text agent model to generate the text inputs. Below is a comparison of MMSite inference time (seconds per sample) between using pre-existing text descriptions from ProTAD and text generated with an agent model. For further context, we also compare against BioMedGPT [1], another excellent protein-to-text model. The tests were conducted on a single GPU (NVIDIA GeForce RTX 4090) and a CPU (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz).
|Source of Text|Average GPU time (s/sample)|Average CPU time (s/sample)|
|:-:|:-:|:-:|
|ProTAD|0.1336|1.7523|
|Prot2Text generation|0.9044|4.2963|
|BioMedGPT generation|5.1136|71.1690|
Additionally, we also compared the model performance using text generated by Prot2Text and BioMedGPT:
|Source of Text|F_max|AUPRC|MCC|OS|FPR|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Prot2Text generation|**0.8250**|0.8909|**0.8319**|**0.8549**|**0.1689**|
|BioMedGPT generation|0.8230|**0.8921**|0.8304|0.8540|0.1693|
As the comparison shows, while using Prot2Text does increase the inference time compared to using pre-existing ProTAD descriptions, it provides performance comparable to BioMedGPT at a much smaller inference time cost. We will add this discussion to the revised paper.
**Re - Weaknesses 2**
Thank you for your careful review and constructive suggestions! You are correct that MMSite should be classified as a Seq. & Text model, and we will revise the tables accordingly. We will address these issues in future versions. Thank you again for your valuable feedback.
**Re - Weaknesses 3**
Please refer to the responses in Re - Questions 1-3 sections.
**Re - Questions 1**
Thank you for your insightful question! While ProtST is also a protein-text multimodal model, it aligns global sequence-level features with textual descriptions during training, rather than focusing on token-level features. This approach overlooks individual amino acid characteristics, causing deviations that reduce their effectiveness in reflecting inherent properties, ultimately resulting in poorer performance. In contrast, MMSite integrates text-empowered sequence representations directly at the token level. By using the MACross module and skip concatenation, we preserve the original features of amino acids while enriching them with relevant textual context. This approach ensures that amino acid representations maintain their intrinsic properties and benefit from the additional information provided by the text descriptions.
**Re - Questions 2**
Thank you for your careful review of our paper! In the reference you mentioned (Prot2Text), the original authors adopted both cross-modal and uni-modal soft-label alignment. The uni-modal alignment was proposed to address the issue where, despite alignment between two modalities in a cross-modal setting, the feature distribution within the same modality could deviate. This deviation may cause non-corresponding samples from the same modality to be very close in the feature space, leading to incorrect retrieval. However, in our task, retrieval is not a concern, and incorporating an additional uni-modal alignment loss could lead to suboptimal performance. To evaluate this, we conducted a comparison experiment:
|Method|F_max|AUPRC|MCC|OS|FPR|
|:-:|:-:|:-:|:-:|:-:|:-:|
|w/o uni-modal soft-label alignment|**0.8250**|**0.8909**|**0.8319**|**0.8549**|**0.1689**|
|w/ uni-modal soft-label alignment|0.8095|0.8739|0.8172|0.8416|0.1829|
As shown in the table, adding uni-modal soft-label alignment resulted in decreased performance across all metrics. Therefore, we opted not to include this additional alignment in our final approach.
**Re - Questions 3**
Thank you for your insightful question! We built the ProTAD dataset based on UniProt, and it contains more than 570,000 pairs of protein sequences and textual descriptions. To ensure diversity and avoid redundancy, we used MMSeqs2 to cluster these sequences based on a sequence identity threshold. We then randomly selected one sequence from each cluster to construct the final dataset (as illustrated in Appendix A.2). Therefore, the size of the training set is determined by the number of clusters, which in turn depends on the clustering threshold.
Thank you again for your time and consideration.
Best regards,
Authors
**Reference**
[1] Zhang K, Yu J, Yan Z, et al. BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks. arXiv, 2023.
---
Rebuttal Comment 1.1:
Title: All of my concerns have been addressed and I like this work
Comment: The authors have made a great effort during the rebuttal. All concerns have been addressed. Consequently, I raised the score and recommended acceptance of this submission.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your thoughtful consideration and positive recommendation. Thank you for your time and effort in reviewing our work. | Summary: The paper constructs a Protein-attribute text dataset, ProTAD, and proposes a multi-modal framework, MMsite, that enhances PLM with biomedical language models. During the inference stage, the authors propose generating biomedical text with Prot2Text, which is then fed into the MMsite module. The paper demonstrates that BLM enhances PLM’s performance on protein active sites identification, which is less studied in protein representation learning settings, through various experiments.
Strengths: - The paper demonstrates that BLM enhances PLM’s performance through various experiments. It can be integrated with different PLMs and BLMs in future studies.
- The cross-modal soft-label alignment and multi-modal fusion for prediction is a novel contribution that could be adopted for the fusion of diverse models.
Weaknesses: - Experiments are mainly performed on one dataset. More general protein-level tasks or residue-level tasks should be included, such as protein fitness prediction, protein localization prediction, protein function annotation, and binding site prediction.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. **Generalizability**: Will the MMsite model help with other general protein representation learning tasks? We hope to gain more insights into the incorporation of biomedical textual data.
2. I believe that the ProtDescribe [1] dataset has already built a protein-text dataset from Swiss-Prot. Could you explain the necessity and advantage of your work?
3. In the experiment table, ProtST [1] is not better than non-finetuned ESM? Normally, ProtST-induced PLMs outperform the vanilla PLMs. Could you provide an explanation?
4. On line 176, you mentioned, “However, in our case, there may be a potential semantic association between unpaired sequences and textual descriptions in the same batch due to the principle ‘Similar protein sequences give rise to similar structures and functions’ in biology.” Could you elaborate on how you curate the batches?
5. During inference, the Prot2Text model uses an ESM2 model and a GPT-2 decoder, which I suppose would be time-consuming. I hope the authors can discuss this issue.
[1] Xu M, Yuan X, Miret S, et al. Protst: Multi-modality learning of protein sequences and biomedical texts[C]//International Conference on Machine Learning. PMLR, 2023: 38749-38767.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Please refer to Weaknesses and Questions sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your valuable feedback and constructive suggestions. Below is our point-by-point response to your comments.
**Re - Weaknesses**
To the best of our knowledge, ProTAD is currently the only dataset that simultaneously includes amino acid-level labels and detailed protein textual descriptions, making direct comparisons with other datasets challenging. Regarding the general protein-level tasks you mentioned, our MMSite framework incorporates comprehensive and rich textual descriptions of proteins as input. These descriptions inherently contain information about function and/or localization, making evaluations on these tasks somewhat unreasonable. In contrast, the annotation of active sites is not explicitly included in the input, which justifies the focus on active site prediction in our work. As for binding site prediction, it is not suitable to rely solely on protein sequence, as the binding sites often depend on specific interactions with various small molecules, which are not included in our model's input.
**Re - Questions 1**
Please refer to our response in the Re - Weaknesses section above.
**Re - Questions 2**
Thank you for mentioning an important related work! Here we compare our ProTAD dataset with ProtDescribe:
Firstly, ProtDescribe includes 4 text attributes, whereas ProTAD comprises 17 text attributes, providing more comprehensive and detailed information.
Secondly, ProTAD includes fine-grained amino acid-level labels, which are not present in ProtDescribe. These labels are crucial for the residue-level tasks such as active site identification from a multimodal perspective.
**Re - Questions 3**
ProtST-induced PLMs do indeed outperform vanilla PLMs in many downstream, sequence-level tasks, such as protein localization prediction and function annotation.
This is because ProtST aligns the protein-level sequence embeddings with text during training, rather than token-level features, causing the representations to focus on the global function of the protein.
We suspect that by doing so, the model loses sharpness in the individual characteristics of each amino acid, which reduces their effectiveness in residue-level tasks.
**Re - Questions 4**
In fact, we follow the standard practice of randomly sampling each batch from the training set. However, considering this biological principle, we use soft-label alignment to address the limitations of traditional hard-label alignment (such as CLIP [1]), where positive pairs are assigned a label of 1, and negative pairs are assigned a label of 0. The model then “pulls close” the positive pairs and “pushes away” the negative pairs in the high-dimensional space with equal emphasis on each pair. However, for similar protein sequences $S_a$ and $S_b$, their corresponding texts $T_a$ and $T_b$ may also be similar. Thus, pushing away $S_a$ and $T_b$ may not always be reasonable. To mitigate this issue, we employ soft-label alignment, which aligns the distribution $Q_i^{\rm s2t}$ towards $P_i^{\rm s2s}$ and $Q_i^{\rm t2s}$ towards $P_i^{\rm t2t}$ as described in Equation 8. This approach allows for a more nuanced alignment that respects the inherent similarities within the sequence and textual domains.
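As an illustration, a minimal NumPy sketch of such a soft-label alignment objective is given below. The temperature, normalisation, and exact KL direction are illustrative assumptions, not the paper's Equation 8:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_label_alignment_loss(seq_emb, txt_emb, tau=0.1):
    """Align cross-modal similarity distributions (Q) to intra-modal
    ones (P) with KL divergence, instead of CLIP-style 0/1 targets."""
    s = seq_emb / np.linalg.norm(seq_emb, axis=1, keepdims=True)
    t = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    # Row i of each matrix is a distribution over the batch for sample i.
    P_s2s = softmax(s @ s.T / tau)  # soft labels from sequence similarity
    P_t2t = softmax(t @ t.T / tau)  # soft labels from text similarity
    Q_s2t = softmax(s @ t.T / tau)  # cross-modal predictions
    Q_t2s = softmax(t @ s.T / tau)
    eps = 1e-12
    kl = lambda p, q: (p * (np.log(p + eps) - np.log(q + eps))).sum(1).mean()
    return kl(P_s2s, Q_s2t) + kl(P_t2t, Q_t2s)
```

With this formulation, similar sequences produce soft targets strictly between 0 and 1, so $S_a$ and $T_b$ are never fully pushed apart, unlike under hard 0/1 labels.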
**Re - Questions 5**
Thank you for your valuable suggestion!
During inference, if the protein has corresponding multi-attribute text descriptions, the process is the same as during training. However, if such descriptions are not available, we use the Prot2Text agent model to generate the text inputs. Below is a comparison of MMSite inference time (seconds per sample) between using pre-existing text descriptions from ProTAD and generated text with an agent model. For further context, we also compare against the BioMedGPT [2], another excellent protein-to-text model. The tests were conducted on a single GPU (NVIDIA GeForce RTX 4090) and a CPU (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz).
|Source of Text|Average GPU time (s/sample)|Average CPU time (s/sample)|
|:-:|:-:|:-:|
|ProTAD|0.1336|1.7523|
|Prot2Text generation|0.9044|4.2963|
|BioMedGPT generation|5.1136|71.1690|
We also compared the model performance using text generated by Prot2Text and BioMedGPT:
|Source of Text|F_max|AUPRC|MCC|OS|FPR|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Prot2Text generation|**0.8250**|0.8909|**0.8319**|**0.8549**|**0.1689**|
|BioMedGPT generation|0.8230|**0.8921**|0.8304|0.8540|0.1693|
As the comparison shows, while using Prot2Text does increase the inference time compared to using pre-existing ProTAD texts, it provides performance comparable to BioMedGPT at a much smaller inference time cost. We will add this discussion to the revised paper.
Thank you again for your time and consideration.
Best regards,
Authors
**Reference**
[1] Radford A, Kim J W, Hallacy C, et al. Learning Transferable Visual Models From Natural Language Supervision. ICML, 2021.
[2] Zhang K, Yu J, Yan Z, et al. BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks. arXiv, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses. I believe some of my concerns have been addressed, and I have raised my score to 6.
However, I still feel that the work's originality compared to previous research is marginal.
---
Reply to Comment 1.1.1:
Comment: We are pleased to hear that the rebuttal has addressed your concerns and appreciate that you have raised your score!
While our approach of using text descriptions to enhance protein representations may seem similar to previous research, our innovation lies in the integration of protein sequences and biomedical texts for fine-grained, **token-level** prediction of active sites. To the best of our knowledge, this is the first work to tackle this task in such a manner. Given the abundance of text available in the biomedical domain, we believe that our work has the potential to inspire further impactful research in this direction and lead to breakthroughs in important but data-scarce tasks. | Summary: The paper proposes a framework to improve the active site prediction for the protein representations by fine-tuning the model on the text function descriptions.
Strengths: * The paper addresses active site prediction, which is an interesting and understudied problem in protein science.
* The study compares its proposed method with numerous baselines.
Weaknesses: * Limited Method Novelty: The paper's core idea of using text descriptions to enhance protein representations lacks originality. Similar approaches have been previously explored in works such as ProtST (Xu 2023) and ProteinCLAP (Liu 2023). Given that this is the major component of the paper, the method appears incremental.
* Insufficient Motivation: The paper fails to provide a clear rationale for training on 17 different types of function annotations to improve active site predictions. Many of these annotations, such as organism information, seem irrelevant to the task at hand. Furthermore, the paper does not adequately explain how protein-level annotations are expected to benefit amino acid-level predictions.
* Lack of Temporal Evaluation: While the paper adopts a structural split for evaluation, which is acceptable, a temporal-based evaluation would be more ideal and realistic. A temporal split, where some proteins are held out based on their discovery time, would more accurately reflect real-world scenarios in scientific applications.
* Poor Writing Quality: The overall writing quality of the paper is subpar, particularly in the methods section. The explanation of methodologies is convoluted and difficult to follow.
Technical Quality: 2
Clarity: 2
Questions for Authors: ProtST is also trained on text descriptions. What do you think makes your method outperform ProtST?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your valuable feedback and constructive suggestions. Below is our point-by-point response to your comments.
**Re - Weaknesses 1**
We agree that using text descriptions to enhance protein representations may seem similar to existing works. However, our innovation lies in the integration of protein sequences and biomedical texts for the **token-level** prediction task of active site identification. Experiments show that our method outperforms existing multi-modal methods designed for sequence-level prediction tasks. As there is abundant text available in the biomedical domain, we anticipate our work will inspire more impactful research in this direction and lead to breakthroughs in important but data-scarce tasks.
**Re - Weaknesses 2**
Regarding the motivation of MACross, it is based on our judgement that the Function attribute is the most relevant attribute for predicting active sites and contains the richest information. Therefore, in our MACross module, we prioritize the Function attribute and integrate the remaining annotations through cross-attention mechanisms. This ensures the rich information from the Function attribute is complemented by other annotations.
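As an illustration of this prioritization, here is a minimal sketch of single-head cross-attention where the Function-attribute tokens act as queries and the remaining attributes' features serve as keys and values. The identity projections and all shapes are hypothetical simplifications, not the actual MACross module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, kv):
    """Single-head attention: each query row attends over the rows of kv."""
    d = query.shape[-1]
    weights = softmax(query @ kv.T / np.sqrt(d))  # (N_q, N_kv), rows sum to 1
    return weights @ kv                           # (N_q, d)

rng = np.random.default_rng(0)
func_tokens = rng.normal(size=(10, 16))  # Function-attribute features (queries)
other_attrs = rng.normal(size=(16, 16))  # remaining attributes (keys/values)
enriched = cross_attention(func_tokens, other_attrs)
assert enriched.shape == (10, 16)  # Function tokens enriched by the others
```

Because each output row is a convex combination of the other attributes' features, the Function representation stays central while the remaining annotations only complement it.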
The additional attributes, although seemingly insignificant, also contribute to improving active site prediction. For example, subcellular location provides context about protein accessibility and potential interactions, critical for identifying active sites. Similarly, organism information offers evolutionary context, highlighting conserved or mutated motifs. The table below shows the performance comparison between using only the Function attribute and using all attributes, demonstrating that incorporating all attributes leads to better performance.
|Textual Description|F_max|AUPRC|MCC|OS|FPR|
|:-:|:-:|:-:|:-:|:-:|:-:|
|All attributes|**0.8250**|**0.8909**|**0.8319**|**0.8549**|**0.1689**|
|Only Function attribute|0.8152|0.8866|0.8231|0.8471|0.1764|
Regarding how protein-level annotations help, we apologize for not having explained this clearly in the paper. In the rich literature (i.e., abundant text) on protein function, scientists have paid special attention to the identification, structural analysis, and engineering of active sites. This valuable information is stored not in protein sequences but in biomedical texts. We expect BLMs to retrieve this information given protein-level annotations, providing context for active site prediction.
**Re - Weaknesses 3**
We appreciate your suggestion for a temporal-based evaluation. To address this, we conducted an additional experiment simulating the discovery of new proteins. Our ProTAD dataset includes data up to March 11, 2024. We collected data from UniProt recorded after this date, representing newly discovered proteins (115 samples with active site labels), to evaluate the model's performance. The results show that MMSite maintains high performance even on newly discovered proteins, as shown in the table below.
|Test Data|F_max|AUPRC|MCC|OS|FPR|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Newly discovered proteins|0.8432|0.8865|0.8460|0.8465|0.1420|
**Re - Weaknesses 4**
We apologize for the shortcomings in our writing. Here we provide a clearer explanation of our methodologies in the Methods Section:
- In Section 3.1, we formulate our task of predicting active sites in proteins using both protein sequences and multi-attribute textual descriptions.
- In Section 3.2, we detail our methodology through four main steps:
- **Attribute description reconstruction with prompt:** We use manual prompting to reconstruct multi-attribute text descriptions into a format processable by BLMs.
- **Modality feature extraction:** We extract features from both protein sequences and textual descriptions using a PLM and a BLM, respectively. In the MACross module, we separately process the Function attribute and integrate it with the remaining attributes using cross-attention.
- **Cross-modal soft-label alignment:** Our two-stage training strategy, "First Align, Then Fuse", uses soft-label alignment to align features from two modalities. This method differs from the traditional contrastive approaches that "pull close" positive pairs and "push away" negative pairs, which may not be suitable when there is significant similarity between different sequences or texts in a batch.
- **Multi-modal fusion and active site identification:** We then fuse the two modalities through fusion attention and skip concatenation to predict active sites. In addition, since newly discovered proteins may lack corresponding text descriptions during inference, we use an agent model to generate the necessary texts, which are then used alongside the sequences.
Thank you for the opportunity to present our writing structure, and we will reorganize the content of this section to enhance the clarity and readability of our work in future versions.
**Re - Questions**
While ProtST also uses text descriptions, it aligns global protein-level sequence representations with textual descriptions during its training stage, causing the representations to focus on the global function of the protein. This training approach overlooks individual amino acid features, which reduces their effectiveness in residue-level tasks. In contrast, MMSite integrates text-empowered sequence representations directly at the token level. By using the MACross module and skip concatenation, we preserve the original features of amino acids while enriching them with relevant textual context. This approach ensures that amino acid representations maintain their intrinsic properties and benefits from the added information provided by the text descriptions.
Thank you again for your time and consideration.
Best regards,
Authors
---
Rebuttal 2:
Comment: Thank you for addressing some of my concerns. I've updated my score to reflect these clarifications. However, I still have two main concerns:
Novelty: The work's originality compared to previous research still feels marginal.
Dataset motivation: The rationale for collecting such a large dataset with 17 attributes is not fully justified, especially given the experimental results. The new experiment shows that training on the function attribute alone achieves similar performance to using all 17 attributes. This raises questions about the necessity and efficiency of the full dataset.
---
Rebuttal Comment 2.1:
Comment: We are glad to hear that the rebuttal addresses your concerns, and also glad that you have raised your score. Regarding the two points you mentioned:
**Novelty:** In this work, we integrate protein sequences with biomedical text to achieve fine-grained, **token-level** prediction of active sites, which, to the best of our knowledge, is the first to tackle this task in such a manner. We believe that our approach of using text descriptions to enhance protein representations for residue-level learning has the potential to inspire further impactful research in the biomedical domain.
**Dataset motivation:** While the new experiment shows similar performance, training with all attributes still yields better results across all metrics. Our rationale for collecting such a large dataset was to capture a broader range of contextual information, which can be beneficial in more complex scenarios or when specific attributes provide unique insights not captured by the Function attribute alone. In addition, a rich dataset with diverse attributes provides greater flexibility for future experiments and allows for fine-tuning to explore how these attributes interact and contribute to performance in different contexts.
We will continue to refine our work in the future. Thank you again for your valuable comments! | Rebuttal 1:
Rebuttal: Dear Reviewers,
We wish to extend our sincere gratitude for your time and insightful feedback on our manuscript. Your insights are greatly helpful in improving the quality and clarity of our work. The following is a summarized response to some of the valuable suggestions and common issues raised.
**1. Evaluation Suggestions**
Many reviewers' considerations regarding performance evaluation are valuable, including:
- Reviewer **9yAm**'s concern on the comparison with traditional methods, and the quality of texts generated by Prot2Text.
- Reviewer **CdPy**'s concern on the significance of different text attributes and performance on newly discovered proteins.
- Reviewers **zjg9** and **LZPM**'s concern on the model’s computational costs during inference, and the construction of loss function.
- Reviewer **sxh3**'s concern on the necessity of manual prompting.
We will include these validation experiments in the revised version of the paper to address these valuable insights.
**2. Comparison with ProtST**
- Reviewers **CdPy**, **zjg9**, **LZPM**, and **sxh3** raised questions about ProtST's performance compared with MMSite. ProtST aligns global protein-level sequence representations with text, missing individual amino acid details. MMSite, however, focuses on integrating text-empowered sequence representations with amino acid features, which preserves the intrinsic properties of amino acids while benefiting from the additional information provided by text descriptions.
**3. Performance at Different Clustering Thresholds**
- Reviewers **9yAm** and **sxh3** noted that MMSite's performance is similar to baseline models at high clustering thresholds. This occurs because a higher clustering threshold results in more similarity between the training set and the test set. In a harsh split strategy, MMSite can identify meaningful patterns from both sequence and text, whereas baseline models cannot. Additionally, when the threshold is high, sequence-only models perform very well already, leaving little room for MMSite to improve further.
**4. Figures and Additional Clarifications:**
- Reviewers **9yAm** and **sxh3** suggested improvements for Figures 1 and 2. We have updated these figures in the uploaded PDF to more clearly reflect our work's aspects and MMSite's workflow.
- Additionally, we provide detailed dimension changes for each module to enhance understanding:
- *Stage 1: Feature Extraction*
- **PLM** extracts sequence features: $N^{\rm s}\times d^{\rm s}$ (sequence length × sequence dimension).
- **BLM** extracts text features: $\\{N_i^{\rm t}\times d^{\rm t}\\}_{i=1}^{M}$ (length of $i$th attribute text × text dimension, for $M$ attributes).
- A **Linear** layer maps the sequence features to $N^{\rm s}\times d^{\rm t}$.
- In **MACross**, the [CLS] tokens of $M-1$ attributes (excluding Function) are concatenated to form a vector of dimension $(M-1)\times d^{\rm t}$ after **Inter-attribute Fusion**. Overall, MACross outputs a vector of dimension $(M-1)\times d^{\rm t}$.
- *Stage 2: Semantic Alignment*
- The **Shared Transformer Encoder** outputs the sequence and text features of dimension $N^{\rm s}\times d^{\rm s}$ and $(M-1)\times d^{\rm t}$, respectively.
- The first dimension is then averaged along for both modalities to obtain two vectors of $1\times d^{\rm t}$ for similarity calculation.
- *Stage 3: Fusion and Prediction*
- The **Fusion Attention** outputs the text-enhanced sequence representation with dimension $N^{\rm s}\times d^{\rm t}$.
- The original sequence features are concatenated along the second dimension with the text-enhanced sequence, resulting in $N^{\rm s}\times (d^{\rm s}+d^{\rm t})$.
- The **Active Site Prediction Head** maps this to $N^{\rm s}\times 2$ using an MLP.
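As an illustration, the dimension flow above can be traced with a small NumPy sketch. All sizes and the stand-in operations (zero matrices in place of learned layers) are placeholders, not the actual module internals:

```python
import numpy as np

N_s, d_s = 32, 128        # sequence length, PLM hidden size (placeholders)
M, N_t, d_t = 17, 20, 96  # attribute count, text length, BLM hidden size

seq = np.zeros((N_s, d_s))                        # PLM sequence features
texts = [np.zeros((N_t, d_t)) for _ in range(M)]  # BLM features per attribute

# Stage 1: a Linear layer maps sequence features to the text dimension.
seq_proj = seq @ np.zeros((d_s, d_t))
assert seq_proj.shape == (N_s, d_t)

# MACross: [CLS] tokens of the M-1 non-Function attributes.
macross_out = np.stack([t[0] for t in texts[1:]])
assert macross_out.shape == (M - 1, d_t)

# Stage 2: average along the first dimension for similarity calculation.
seq_vec = seq_proj.mean(axis=0, keepdims=True)
txt_vec = macross_out.mean(axis=0, keepdims=True)
assert seq_vec.shape == txt_vec.shape == (1, d_t)

# Stage 3: fusion attention yields an (N_s, d_t) text-enhanced sequence
# (seq_proj stands in for it here); skip-concatenate with the original
# features, then apply the MLP prediction head.
fused = np.concatenate([seq, seq_proj], axis=1)
assert fused.shape == (N_s, d_s + d_t)
logits = fused @ np.zeros((d_s + d_t, 2))
assert logits.shape == (N_s, 2)
```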
Finally, we appreciate your detailed suggestions on acronyms, writing quality, notation, and figures. We will carefully follow your suggestions in the revised version to ensure they are correct and clear.
Thank you again for your valuable feedback.
Best regards,
Authors
Pdf: /pdf/1e58161df07787b2a5dc0c1c9dae1cf85fda6da9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper studies the problem of identifying active sites in proteins.
The paper proposes a novel multi-modal framework, MMSite that aligns and fuses textual and sequential modalities of protein sequences ("First Align, Then Fuse" strategy). The authors leverage pre-trained protein language models and biomedical language models for extracting features from protein sequences and attribute textual descriptions.
For this training, they introduce their newly constructed ProTein-Attribute text Dataset (ProTAD) and a MACross attention module for combining multiple attributes.
Via empirical evaluation, they show that MMSite can significantly enhance the identification of active sites in proteins compared to existing protein representation learning methods, e.g., ESM, and ProtST. The paper also provides a set of ablation studies for a better understanding of the effect of frameworks' components and configuration.
Strengths: 1. Their construction of a large-scale, multi-modal dataset for ProTein-Attribute association is valuable for further research in this field.
2. The multi-modal framework MMSite is somewhat novel in the context of active site prediction. This idea of multi-modality has been explored recently for protein studies, e.g., ProteinCLIP (https://www.biorxiv.org/content/10.1101/2024.05.14.594226v1)
3. The paper demonstrates significant performance improvement on the evaluation testbed.
4. This paper provides a good and thorough set of ablation studies and examples (in the Appendix) that help understand the framework and its working scenarios.
Weaknesses: My main concern is about the **evaluation**:
1. Baselines: the paper only compares the framework against PRL baselines adapted with four trainable Transformer layers, which are smaller in size than MMSite and not designed for the task. Furthermore, there are some PLMs with better performance than the baselines, e.g., SaProt (https://www.biorxiv.org/content/10.1101/2023.10.01.560349v5.full.pdf)
How about the existing active sites prediction methods without the need for text mentioned in the paper? e.g., classical methods (Random Forest and Support Vector Machine), statistics-based approaches, DeepSurf, ... or methods using structure instead of text?
2. Dataset: The dataset (ProTAD) might have inherent biases that are not discussed in the paper. If the dataset is not diverse enough, the model may perform well on the dataset but fail to generalize to other proteins. If the paper could provide another dataset, that would be more convincing.
3. In Table 9, for the dataset with a clustering threshold of 50%, we observe that the base PLM models are close to MMSite and can outperform MMSite on some metrics. So, is it the case that MMSite heavily depends on the base model and works well only at low clustering thresholds?
**Methods:**
* The novelty is somewhat limited in terms of machine learning methodology as the paper heavily relies on existing methods like PLMs, BLMs, and cross-attention mechanisms. The use of soft-label alignment is applied in a new context.
From my understanding, besides the new dataset, the main novelty is the design of the MMSite framework for the biology application.
* The model works only with short sequences (512 amino acids)
**Claims:**
* The claim that their method "enhances the identification capabilities of PLMs" should be elaborated more clearly in context, as MMSite, to my understanding, does not improve the PLM itself but rather just uses it.
* Some biology claims are without reference and need further explanation, especially for readers with primary machine learning background. For examples, (lines 177--179) "... due to the principle “Similar protein sequences give rise to similar structures and functions” in biology."
**Minors:**
* Some figures may need to be improved, both in quality and presentation. For example, in Figure 1, the dashed lines are not well aligned and the introduction of the "Mark Zuckerberg" figure seems unnecessary.
* "soft-label" and "soft label" inconsistency format
* In Figure 2, $f_{\Phi}$ denotes the PLM, but it seems $f_{\phi}$ is used in the main text.
* Typos:
* Line 105, "Sites" -> "sites"
* Fig1 caption: "mutual tasks", is that "multi-modal"?
* abstract: "life science" -> the life sciences
Technical Quality: 3
Clarity: 3
Questions for Authors: Together with the above questions, I have the following questions,
1. What is the accuracy/performance of the Pro2Text on the dataset? We can test this on the proteins that the text exists.
2. How does the MMSite perform when we use Pro2Text to generate texts for training instead of the textual description from the dataset?
3. For text processing, how do you process the Nan attributes? Are you using Pro2Text to fill in?
4. For the alignment phase, what happens if some proteins in the batch are 10% different while their texts are similar?
5. In Figure 1, the MACross is skipped when texts are not available, and in Table 3, the gain of MACross seems marginal. Have you evaluated the model without MACross, using the prompt to connect multiple attributes instead?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No.
The paper lacks a discussion on the limitations and impact, especially for their practical applications, e.g., discussions regarding the potential misuse of the technology in drug design and bioengineering.
The paper should discuss the scenarios when MMSite works well, e.g., dependency on the base PLM, agent models, and clustering threshold.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your valuable feedback and constructive suggestions. Below is our point-by-point response to your comments.
**Re - Weaknesses Evaluation 1**
Regarding # of trainable params, the difference between MMSite and baselines is not very large, as the former has 9 trainable Transformer layers for sequence and the latter have 4. We are running experiments with 9 layers for baselines for fair comparison and will report results once they finish.
Regarding comparison with other baselines, we conducted experiments with SaProt, Random Forest (RF), Support Vector Machine (SVM), and DISCERN (a stat-based method designed for active site prediction). The results are shown below, with MMSite significantly outperforming them.
||F_max|AUPRC|MCC|OS|FPR|
|:-:|:-:|:-:|:-:|:-:|:-:|
|MMSite|**0.8250**|**0.8909**|**0.8319**|**0.8549**|**0.1689**|
|SaProt (650M-AF2)|0.7181|0.8562|0.7259|0.7480|0.2731|
|RF|0.4339|0.5545|0.4382|0.3137|0.2825|
|SVM|0.4270|0.4708|0.4017|0.3409|0.4000|
|DISCERN|0.2870|0.4539|0.2949|0.1921|0.4176|
For DeepSurf, it is specifically designed for predicting binding sites with molecules, which is not applicable to our task. Besides, we have already compared methods that use structure (e.g. MIF and PST).
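As an aside, the residue-level metrics reported in these tables (e.g., MCC and FPR) can be sketched as follows. This is a hypothetical helper over binary per-residue labels, not the paper's evaluation code; F_max would additionally sweep a decision threshold:

```python
import numpy as np

def residue_metrics(y_true, y_pred):
    """MCC and FPR for per-residue binary active-site predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return mcc, fpr
```

A perfect predictor scores MCC = 1 and FPR = 0; random per-residue guessing drives MCC toward 0.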
**Re - Weaknesses Evaluation 2**
We believe ProTAD is diverse enough, as it is derived from Swiss-Prot, referred to as "a reference resource of protein sequences and functional annotation that covers proteins from all branches of the tree of life" [1]. All Swiss-Prot data with both active site and function annotations are included without cherry-picking. The split strategy is strict (10% sequence similarity), which challenges the generalization ability.
**Re - Weaknesses Evaluation 3**
MMSite depends on the base model, but when the split strategy is harsh (e.g. 10% sequence similarity), the base models couldn't generalize, whereas MMSite can find meaningful patterns from both sequence and text to achieve good performance. At 50% threshold, their performances are close because (1) the dataset is easy enough for base models to generalize, and (2) the performance is already very high, leaving little room to improve.
**Re - Weaknesses Methods 1**
We agree that the novelty of MMSite mainly lies in the design of the framework and its application. However, we believe this does not undermine the significance of our work, which is among the first to integrate protein sequences and biomedical texts for fine-grained, residue-level prediction. As there is abundant text available in the biomedical domain, we anticipate our work will inspire more impactful research and lead to breakthroughs in important but data-scarce tasks.
**Re - Weaknesses Methods 2**
In Appendix C.1, we evaluated MMSite across multiple sequence lengths (from 128 to 1024), and the results show its robust and consistent performance.
**Re - Weaknesses Claims**
Thanks for your suggestions. We will revise our claim to clearly reflect how MMSite builds, and provide more context to help readers better understand the underlying biological rationale in the future version.
**Re - Weaknesses Minors**
We have updated Figure 1 in the submitted PDF, and we will correct the words and notations in the future version. Besides, "mutual tasks" refers to the tasks of generating texts from sequence and retrieving sequences from text. We have revised it in the PDF.
**Re - Questions 1**
To evaluate the performance of Prot2Text, we built a dataset of 1k proteins randomly sampled from ProTAD, as shown below. The performance on ProTAD-1k is good and generally consistent with the reported performance, indicating that Prot2Text can readily serve as an agent to generate textual descriptions for proteins.
||BLEU Score|Rouge-1|Rouge-2|Rouge-L|BERT Score|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Prot2Text (reported)|**36.51**|53.68|45.60|51.40|85.20|
|Prot2Text on ProTAD-1k|25.73|**56.56**|**49.46**|**54.15**|**86.57**|
**Re - Questions 2**
The table below compares the performance of MMSite using human-annotated and Prot2Text-generated texts for training. The performance of the two settings is similar across most metrics.
|Source of Text|F_max|AUPRC|MCC|OS|FPR|
|:-:|:-:|:-:|:-:|:-:|:-:|
|ProTAD (human annotation)|**0.8250**|0.8909|**0.8319**|**0.8549**|**0.1689**|
|Prot2Text generation|0.8194|**0.8923**|0.8254|0.8433|0.1751|
**Re - Questions 3**
For missing values, we use "unknown" as the textual annotation. (Appendix A.1)
**Re - Questions 4**
Suppose proteins $i$ and $j$ have similar texts but different sequences. The core of the alignment loss is two KL terms, which push the distribution $Q_i^{\rm s2t}$ close to $P_i^{\rm s2s}$ and $Q_i^{\rm t2s}$ close to $P_i^{\rm t2t}$. This creates both attractive and repulsive forces between the sequence embedding for $i$ and the text embedding for $j$, striking a balance between full alignment and full separation. Our soft-label alignment strategy ensures that the model can effectively learn the relationship between sequences and texts, even when the two modalities sometimes give conflicting signals.
**Re - Questions 5**
The ablation study in Table 3 has evaluated this scenario. It shows that adding MACross improves the model's performance across the five metrics by about 1.7% on average. The main purpose of MACross is to highlight the importance of the Function attribute, because it contains richer information than the others, which is essential for our task. Otherwise, the model may struggle to prioritize the important attributes, leading to suboptimal performance.
**Re - Limitations**
We discussed the limitations and impacts in Appendix D, and restricted the license to mitigate potential misuse in the code. We will discuss more scenarios in the future version.
Thank you again for your time and consideration.
Best regards,
Authors
**Reference**
[1] Coudert E, Gehant S, De Castro E, et al. Annotation of biologically relevant ligands in UniProtKB using ChEBI. Bioinformatics, 2023.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for the extra results and detailed responses.
Given the similar performance between Prot2Text-generated texts and human-annotated texts, should we directly use Prot2Text-generated texts, which are much more cost-efficient? Or are there any scenarios in which human-annotated texts are preferable?
---
Reply to Comment 1.1.1:
Title: Response and More Evaluation Results
Comment: **Response to Your Question**
Thank you for your question. The table in **Re - Questions 2** compares the performance when trained with human-annotated texts versus Prot2Text-generated texts. While the performance is similar, training with human-annotated texts yields better results across most metrics, as they provide more accurate and comprehensive information. Furthermore, the comparable performance also indicates that Prot2Text is capable of effectively generating textual descriptions, particularly for newly discovered proteins that might lack annotations.
Therefore, we recommend using human-annotated data during training to fully integrate the detailed information provided by these texts. For inference, if human-annotated texts are available, they should be used to better align with the model’s pre-training and leverage the pre-trained knowledge more effectively. Otherwise, Prot2Text-generated texts can serve as a suitable alternative.
**More Evaluation Results for *Re - Weakness Evaluation 1***
In addition, as a supplement to our response to **Weakness Evaluation 1**, we adjusted the number of trainable Transformer layers in the baseline models to ensure that their number of trainable parameters is comparable to that of MMSite. The results are shown in the table below:
| Method & Version | F_max | AUPRC | MCC | OS | FPR |
| :--------------------------: | :--------: | :--------: | :--------: | :--------: | :--------: |
| ESM-1b | 0.7119 | 0.7420 | 0.7207 | 0.7492 | 0.2673 |
| ESM-1v | 0.5706 | 0.8776 | 0.5775 | 0.5846 | 0.4053 |
| ESM-2-650M | 0.6936 | 0.7696 | 0.6014 | 0.6211 | 0.3886 |
| ProtT5-BFD | 0.4211 | 0.6727 | 0.4353 | 0.3705 | 0.5702 |
| ProtT5-UniRef | 0.4262 | 0.6762 | 0.5382 | 0.5437 | 0.5652 |
| ProtBert-BFD | 0.5603 | 0.7073 | 0.5663 | 0.5557 | 0.4063 |
| ProtBert-UniRef | 0.4638 | 0.7204 | 0.4885 | 0.5227 | 0.4400 |
| ProtAlbert | 0.6698 | 0.7284 | 0.6727 | 0.6833 | 0.3246 |
| ProtXLNet | 0.0126 | 0.1270 | 0.0364 | 0.0535 | 0.9401 |
| ProtElectra | 0.5869 | 0.6579 | 0.5976 | 0.5891 | 0.5633 |
| PETA-deep_base | 0.5458 | 0.7941 | 0.5504 | 0.5498 | 0.4337 |
| S-PLM | 0.6745 | 0.8443 | 0.6817 | 0.6811 | 0.2950 |
| TAPE | 0.3270 | 0.5978 | 0.3293 | 0.3239 | 0.6598 |
| MIF | 0.1184 | 0.2306 | 0.1316 | 0.2739 | 0.8784 |
| MIF-ST | 0.1218 | 0.2337 | 0.1372 | 0.2646 | 0.8696 |
| PST-t33 | 0.6514 | 0.8312 | 0.6621 | 0.7007 | 0.3271 |
| PST-t33_so | 0.6598 | 0.8163 | 0.6697 | 0.6990 | 0.3195 |
| ProtST (w/ retrain) -ESM-1b | 0.4201 | 0.6518 | 0.4353 | 0.3880 | 0.5829 |
| ProtST (w/ retrain) -ESM-2 | 0.1340 | 0.4660 | 0.1349 | 0.1370 | 0.7688 |
| ProtST (w/o retrain) -ESM-1b | 0.4770 | 0.7321 | 0.4915 | 0.5122 | 0.3419 |
| ProtST (w/o retrain) -ESM-2 | 0.3375 | 0.6982 | 0.5462 | 0.5313 | 0.4136 |
| **MMSite** | **0.8250** | **0.8909** | **0.8319** | **0.8549** | **0.1689** |
This comparison shows that even with a comparable number of trainable parameters, MMSite significantly outperforms other baseline models.
Thank you again for your question and consideration.
---
Reply to Comment 1.1.2:
Comment: We would like to know if you have any additional concerns or suggestions regarding our work. If possible, we hope to engage in further discussion to earn your endorsement. If you are satisfied with our rebuttal and responses to your **Official Comment**, we kindly ask if you would consider raising your score. Thank you very much for your valuable feedback again. | null | null | null | null | null | null |
Self-Refining Diffusion Samplers: Enabling Parallelization via Parareal Iterations | Accept (poster) | Summary: This paper proposes a new method for parallelizing the sampling process of diffusion models, offering a novel way to trade compute for speed. In particular, it relies on the parareal iteration in this setting, where the main idea is to split the solving timesteps into multiple groups and then update them in parallel. To ensure that the output from these timestep groups matches the exact solution, parareal performs predictor-corrector steps after each group is updated. Experiments demonstrate the efficacy of applying parareal iteration in speeding up the sampling process.
Strengths: 1. This paper demonstrates a very interesting direction by introducing the parareal iteration into diffusion models.
2. The experiments indeed demonstrate the efficacy of this approach.
Weaknesses: 1. I think the author misses a rigorous discussion comparing the parareal algorithm with ParaDiGMS [1] and also completely ignores a more recent reference [2]. It is clear that all these works, including parareal, focus on parallelizing the sampling process via certain forms of fixed-point iteration, although the computing graphs of these algorithms differ. The only discussion in the current manuscript is that ParaDiGMS is highly memory-intensive and must use a sliding window. While this statement is certainly true, I believe it is not a disadvantage of ParaDiGMS and does not imply that ParaDiGMS is inferior to parareal. I do not see why using a sliding window should be considered a disadvantage.
2. As far as I can see, the Eff. Serial Evals in Table 1 and Table 2 should correspond to the Parallel Iters in [1] and Steps in [2], am I correct? Because Eff. Serial Evals essentially represent the number of batch evaluations on the models. If this is the case, based on the current numbers in Table 1, it seems quite inferior to the results in [1] and [2]. Of course, I understand that the batch size may be different in this work, but I still wish the author would provide a detailed discussion on this point.
3. Proposition 1 in the manuscript is somewhat trivial and exists in many existing works. Note that each refinement consists of $\sqrt n$ sequential evaluations, so parareal will also require at most $\sqrt n \times \sqrt n=n$ sequential evaluations on the models to converge. I think the author should mention that this result is fundamentally equivalent to Proposition 1 in [1], and a more general result is found in Theorem 3.6 in [2]. The key point is that when performing fixed-point iteration for an $n$-order triangular system, it will certainly converge within $n$ steps, and this bound cannot be further improved without extra assumptions.
[1] Shih, Andy, et al. "Parallel sampling of diffusion models." Advances in Neural Information Processing Systems 36 (2024).
[2] Tang, Zhiwei, et al. "Accelerating parallel sampling of diffusion models." Forty-first International Conference on Machine Learning. 2024.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In my understanding, the technique in parareal has a different spirit of parallelization from [1][2]. For example, consider that we wish to do $n$-step sampling with batch size $\sqrt n$. Given an initial sequence $[x_1,...,x_n]$, parareal first groups it into $g_1=[x_1,...,x_{\sqrt n}],...,g_{\sqrt n}=[x_{n-\sqrt n},...,x_n]$. The parallelism is to update $g_1,...,g_{\sqrt n}$ in parallel, while the update of each $g_i$ consists of $\sqrt n$ sequential solves. On the other hand, [1][2] consider updating $[x_{i},...,x_{i+\sqrt n}]$ in parallel until convergence. It is not clear to me which one is better from the theoretical aspect, as they have the same worst-case convergence bound. I suspect that it depends on the form of the ODE and also the batch size? I wonder if the authors could provide some thoughts and discussion on this point?
2. Do you think parareal can be applied to the SDE sampler as [1] and [2] did?
[1] Shih, Andy, et al. "Parallel sampling of diffusion models." Advances in Neural Information Processing Systems 36 (2024).
[2] Tang, Zhiwei, et al. "Accelerating parallel sampling of diffusion models." Forty-first International Conference on Machine Learning. 2024.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See my comments on weaknesses and questions above. This work lacks a correct and thorough discussion comparing it to existing works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review, and are excited to hear that the reviewer finds the novel ideas presented to be very interesting, and that the experiments indeed demonstrate the efficacy of this approach! We respond to the concerns below:
> Baselines
Please refer to the combined response to all reviewers. We thank the reviewer for bringing [2] to our attention; we were not aware of this very recent work (ICML 2024) at the time of writing this paper. However, their method uses completely different techniques (namely, formulating and solving a triangular system in a specialized way) to solve the same problem. We have discussed this work in the common response, and will make sure to include a detailed discussion in the final manuscript.
> As far as I can see, the Eff. Serial Evals in Table 1 and Table 2 should correspond to the Parallel Iters in [1] and Steps in [2] ... based on the current numbers in Table 1, it seems quite inferior to the results in [1] and [2]. Of course, I understand that the batch size may be different in this work, but I still wish the author would provide a detailed discussion on this point.
Thank you for highlighting the difference in notation between the papers. You are correct that the Eff. Serial Evaluations correspond to the Parallel Iters in [1] and Steps in [2], where all refer to the number of batched evaluations. We will be sure to clarify this notation in updates to the paper.
The reasons for the seemingly high number of Eff. Serial Evaluations are twofold:
- *Stricter convergence criteria for SRDS*: We note that there is a considerable difference in how the convergence criterion is calculated for ParaDiGMS and SRDS. ParaDiGMS uses a per-step tolerance proportional to the scheduler's noise magnitude and aggressively (and irrecoverably) advances the sliding window once an early step has "converged"; this gives only loose guarantees on the final generated image. In contrast, SRDS decides convergence directly from differences in the final output representation. This strict/conservative threshold for our pixel diffusion results in Table 1 causes a larger number of effective serial iterations. One can comfortably relax this threshold and still maintain good sample quality (as seen in the ablation study in Appendix C). Alternatively, as we show for StableDiffusion, one can instead limit the number of iterations and drastically reduce the Eff. Serial Evaluations (comparable to ParaDiGMS) with no measurable degradation in sample quality! In fact, even for pixel-based diffusion models, we can similarly limit the number of SRDS iterations to 1 or 2 without degradation in sample quality (Figure 7 in Appendix C).
- *Batch size*: As the reviewer correctly hinted, ParaDiGMS requires a much larger batch size. Specifically, for a $T$-step denoising process, ParaDiGMS needs to perform $T$ model evaluations in parallel, while SRDS only requires $\sqrt{T}$ evaluations, which fits comfortably in GPU memory. To combat this using sliding windows, ParaDiGMS can arbitrarily increase the compute and create larger sliding windows (in exchange for reducing the number of serial iterations). It is worth noting that the comparison of the number of serial evaluations in the ParaDiGMS paper was performed on a system of 8 A100 80GB GPUs, whereas our experiments are performed on 4 A100 40GB GPUs. To shed light on this difference in platforms, we also provide a fairer comparison in our experiments in the combined reviewer response, and we will add this clarification to the revised manuscript.
> fixed-point iteration
Our rationale for including Proposition 1 in the manuscript, in a similar manner to Proposition 1 in [1], is to highlight the worst-case behavior of the SRDS algorithm and provide a guarantee of convergence, despite its simplicity. We will be sure to include cross-references to the appropriate results in [1] and [2], especially in the larger context of fixed-point iteration for $n$-order triangular systems.
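To make the shared worst-case bound concrete, here is a toy illustration of our own construction (not from either paper): a fixed-point map with lower-triangular dependence, where output component $i$ depends only on input components $0..i-1$, reaches its exact fixed point within $n$ iterations.

```python
import numpy as np

def F(x):
    """Toy fixed-point map with lower-triangular structure: output
    component i depends only on input components 0..i-1 (a hypothetical
    stand-in for the discretized denoising system)."""
    y = np.empty_like(x)
    y[0] = 1.0                       # component 0 is constant
    for i in range(1, len(x)):
        y[i] = np.tanh(x[:i].sum())  # depends only on earlier components
    return y

n = 6
x = np.zeros(n)
for _ in range(n):  # after at most n iterations, x is an exact fixed point
    x = F(x)
```

Each iteration fixes at least one additional component exactly, so $n$ iterations always suffice and no fewer can be guaranteed in general, which is the common structure behind Proposition 1 here, Proposition 1 in [1], and Theorem 3.6 in [2].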
> different spirit in parallelization with [1][2] ... which one is better from the theoretical aspects
This is a great question! Indeed, SRDS has a different spirit of parallelization compared to [1] and [2]. We like to view it as a multigrid/multiresolution method where we solve for the trajectory at a low resolution and use parallel high-resolution solves to correct the trajectory. This is in contrast to [1] and [2], which have the flavor of more traditional fixed-point iterations/updates. From a theoretical point of view, it is actually unclear which is fundamentally better. Even if SRDS might seem slightly superior empirically today, all three (SRDS, [1], and [2]) are still the first works in this area of parallel sampling, and it is entirely possible that one of them possesses superior theoretical properties/guarantees that can be exploited. As we briefly discussed in the future work section of the paper, convergence guarantees/properties for parareal have been analyzed only in a handful of special cases such as heat equations, and it would be a very interesting future direction to see if the diffusion ODE admits better convergence guarantees for parareal (in line with what is observed empirically).
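For readers unfamiliar with parareal, its coarse-predictor/fine-corrector structure can be sketched on a toy scalar ODE (our own illustrative code, not the SRDS implementation; the fine solves within each iteration are mutually independent and would run in parallel in practice, but are done serially here for clarity):

```python
import numpy as np

def euler(f, x, t0, t1, steps):
    """Serial explicit-Euler solve of dx/dt = f(t, x) from t0 to t1."""
    h = (t1 - t0) / steps
    for i in range(steps):
        x = x + h * f(t0 + i * h, x)
    return x

def parareal(f, x0, t0, t1, n_groups, fine_steps, iters):
    """Parareal predictor-corrector over n_groups time intervals."""
    ts = np.linspace(t0, t1, n_groups + 1)
    # cheap serial predictor: one coarse (1-step) solve per interval
    U = [x0]
    for j in range(n_groups):
        U.append(euler(f, U[j], ts[j], ts[j + 1], 1))
    for _ in range(iters):
        # parallelizable fine solves launched from current interface values
        F_fine = [euler(f, U[j], ts[j], ts[j + 1], fine_steps)
                  for j in range(n_groups)]
        G_old = [euler(f, U[j], ts[j], ts[j + 1], 1) for j in range(n_groups)]
        # serial coarse correction sweep: U_{j+1} = G_new + F_fine - G_old
        U_new = [x0]
        for j in range(n_groups):
            G_new = euler(f, U_new[j], ts[j], ts[j + 1], 1)
            U_new.append(G_new + F_fine[j] - G_old[j])
        U = U_new
    return U[-1]
```

With `iters` equal to the number of groups, the result coincides with the serial fine-grained solve (the worst-case bound); early convergence, as observed empirically for diffusion sampling, corresponds to far fewer iterations sufficing.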
> SDE sampler
Yes, as we mention in the paper, SRDS can indeed be readily applied to the SDE sampler. As seen in [1], we can just use the trick of pre-sampling the noise upfront, and the rest of the algorithm remains the same. Notably, it is even possible to ensure that the noise at the coarse and fine-time scales is aligned; however, this is not necessary due to the use of the predictor corrector updates in the parareal algorithm.
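A toy sketch of the noise pre-sampling trick described above (hypothetical names and update rule, not the actual sampler):

```python
import numpy as np

T, dim = 16, 4
rng = np.random.default_rng(0)
# Draw all stochastic increments upfront.  Every re-evaluation of step t
# then sees the same noise, so the SDE step becomes a deterministic map
# that a predictor-corrector scheme can safely re-apply.
noise = rng.standard_normal((T, dim))

def sde_step(t, x, h=1.0 / T, sigma=0.1):
    """Toy Euler-Maruyama-style step with frozen noise (illustrative only)."""
    return x - h * x + sigma * np.sqrt(h) * noise[t]
```

Because the increments are frozen, re-solving any sub-interval during the corrector updates reproduces the same trajectory, exactly as in the ODE case.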
---
References:
- [1] Shih, Andy, et al. "Parallel sampling of diffusion models." Advances in Neural Information Processing Systems 36 (2023).
- [2] Tang, Zhiwei, et al. "Accelerating parallel sampling of diffusion models." Forty-first International Conference on Machine Learning. 2024.
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal
Comment: After careful consideration, I decided to raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for raising the score! We are glad that your concerns have been addressed. | Summary: This paper proposes Self-Refining Diffusion Samplers (SRDS), which draws inspiration from the Parareal algorithm and aims to solve the reverse process accurately without retraining models, balancing the tradeoff between sample quality and sampling speed and achieving lower latency by requiring fewer sampling steps to reach convergence. The experiment section shows that the proposed algorithm speeds up convergence without degrading sample quality in image generation tasks.
Strengths: 1. This paper is motivated by Parareal algorithm to speed up sampling process without lowering sample quality which seems to work well with image generation.
2. This paper provides theoretical analysis for sampling convergence and latency investigation.
3. This paper observes the pipelined SRDS, which further speeds up the sampling process.
Weaknesses: 1. In related work, the authors mentioned that ParaDiGMS proposed by Shih et al [1] is another parallel-in-time integration method, and this paper also works on image generation. In this case, I believe their results could be baselines that this paper should compare against.
[1] Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh, and Nima Anari. Parallel sampling of diffusion models. Advances in Neural Information Processing Systems, 36, 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. For the experiment section, are the numbers intentionally matched in ``FID Score`` in Table 1 and ``CLIP Score`` in Table 2? If so, it would also be interesting to see how the performance changes with different SRDS Iters and their corresponding Total Evals.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper discussed the limitations and pointed out in the checklist guidelines that potential positive or negative social impacts will not be a direct consequence of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review, and are happy to hear that the experimental results are convincing! We respond to the concerns below:
> ParaDiGMs baseline
Please refer to the combined response to all reviewers, where we clearly demonstrate the superiority of SRDS over ParaDiGMS. For instance, SRDS is up to 38% faster than ParaDiGMS for DDIM on identical hardware even when using very generous convergence thresholds for ParaDiGMS.
> Are the numbers intentionally matched in FID Score in Table 1 and CLIP Score in Table 2
Thanks for the question! Yes, the FID Score in Table 1 and the CLIP Score in Table 2 are reported at a tolerance / number of SRDS iterations chosen so that there is no measurable degradation in sample quality. In the original version of the manuscript, we included an ablation study of the number of SRDS iterations and the corresponding CLIP score in Figure 5, showcasing the model's capability to achieve early convergence. For pixel-space diffusion, we provide an ablation of tolerance threshold vs. sample quality in Appendix C. We note that we use the KID score instead of the FID score due to computational budget constraints (KID is an unbiased estimator and requires far fewer samples than FID).
---
Rebuttal Comment 1.1:
Comment: I have read through the responses and decide to increase the score from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for raising your score! We are glad to hear that your concerns have been addressed. | Summary: Inspired by the parallel-in-time ODE integration literature, especially Parareal method, this paper introduces SRDS as a fast sampler for diffusion models enabling efficient parallelization. Experimental results demonstrate that the proposed SRDS reduces the number of steps required to synthesize samples.
Strengths: It is interesting to use parallel sampling algorithms to accelerate the sampling of diffusion models, which will benefit practical applications such as real-time image editing.
Weaknesses: My concerns are mostly about the limited experimental results:
1. This paper lacks a clear discussion of the differences between the proposed method and the previous work "23NeurIPS-Parallel sampling of diffusion models," as well as sufficient experimental evidence to demonstrate the effectiveness of the proposed method. Specifically, there is no quantitative comparison with the most related method, ParaDiGMS. It seems that the existing method ParaDiGMS can provide a better speedup according to their results.
2. This paper lacks a sufficient comparison to existing fast samplers (such as Heun, DEIS, DPM-Solver). In practice, we can already synthesize samples with those advanced samplers in only around 10 steps, while this paper only presents experiments with several hundred to one thousand steps in Table 1, which makes the proposed method unappealing.
3. It is claimed in Lines 38-40 and 196-198 that SRDS outperforms ParaDiGMS in terms of memory requirements. However, there is no quantitative support from experimental results.
4. It is claimed in Lines 62-64 that SRDS is compatible with off-the-shelf solvers. However, the speedup is only shown for DDIM, and is only explicitly reported in Table 2, making it hard to fully assess the effectiveness of the proposed method.
5. There is no quantitative comparison between SRDS and pipelined SRDS.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. Could the pipelined SRDS be easily deployed in practice?
2. Why the SRDS Iters shown in Table 1 are not integers?
3. Why the first and second rows of Figure 6 are almost the same?
4. Minor points:
(1) In line 10 of Algorithm 1, $x_{i-1}^p$ is undefined for p == 1.
(2) It is redundant to use Figure 2, Figure 3b and Algorithm 1 to describe the same thing in the main text. They together occupy a whole page. Adding more experiments to support the effectiveness of the proposed method is more valuable.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Please refer to the weakness and question sections above. This manuscript needs an overhaul to make it meet the bar of a top conference.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review, and appreciate that they recognize the practical applicability of our work! We respond to the concerns below:
> ParaDiGMs baseline
Please refer to the combined response to all reviewers, where we clearly demonstrate the superiority of SRDS over ParaDiGMS. For instance, SRDS is up to 38% faster than ParaDiGMS for DDIM on identical hardware even when using very generous convergence thresholds for ParaDiGMS.
> Comparison to existing solvers
We'd like to re-emphasize that SRDS provides an improvement orthogonal to the other lines of research on accelerating diffusion model sampling. In particular, while the main experiments (and writing) focused on DDIM, SRDS is compatible with other solvers, which can be readily incorporated to speed up diffusion sampling. We have also empirically verified that SRDS can be used with other solvers, such as DDPM (often requiring more steps than DDIM) and DPMSolver (often requiring fewer steps than DDIM). For both methods, SRDS maintains a speedup over the corresponding baseline serial implementation (~3.6x for 1000-step DDPM and ~1.5x for 25-step DPMSolver). Please see the combined response for a broader discussion of this extension.
> SRDS outperforms ParaDiGMs in terms of memory requirement
Apologies for the confusion. Yes, ParaDiGMS indeed has a higher memory requirement. Specifically, for a $T$-step denoising process, ParaDiGMS needs to perform $T$ model evaluations in parallel, with subsequent computations needing information about all previous evaluations, while SRDS only requires $\sqrt{T}$ parallel evaluations (which fit comfortably in GPU memory) and requires much less communication between GPUs. While the prohibitively large memory requirement can be combated with a sliding-window method, the significantly larger communication overhead remains, because at every step of ParaDiGMS an AllReduce over all devices must be performed in order to calculate updates to the sliding window. (For instance, even when ParaDiGMS reduces Eff. Serial Steps by 20x, the obtained speedup is only 3.4x.) This is in contrast to the independent fine solves in parareal, which only need to transfer information for the coarse solve.
Below, we demonstrate how the minimal memory and communication overhead of SRDS shines through: we achieve better device utilization as we increase the number of available GPUs. The following experiment was performed on 40GB A100s and used a generous 1e-2 threshold for ParaDiGMS.
| Method | Devices | Serial Model Evals | SRDS Eff. Serial Evals | SRDS Time Per Sample | ParaDiGMS Eff. Serial Evals | ParaDiGMS Time Per Sample |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| DDIM | 1 | 25 | 15 | **1.62** | 16 | 2.71 |
| DDIM | 2 | 25 | 15 | **1.08** | 16 | 2.01 |
| DDIM | 4 | 25 | 15 | **0.82** | 16 | 1.51 |
In the table above, we can observe how SRDS has a significantly better time per sample than ParaDiGMS despite having a similar number of effective serial evaluations.
> Pipelined SRDS
We compare SRDS and pipelined SRDS in the overall response to all reviewers. We hope this provides more context and strengthens the empirical evaluation!
> Could the pipelined SRDS be easily deployed in practice?
Yes, the pipelined SRDS can be easily deployed in practice, though it takes minor adjustments to common diffusion modeling code in order to be implemented efficiently. For example, the StableDiffusion pipeline makes use of a scheduler that sets the number of inference steps ahead of time (as a class variable); allowing this parameter to be modified at runtime is required for pipelining. Additionally, a framework is required to coordinate computation and data transfers; our implementation makes use of torch.multiprocessing and queues to coordinate the launches of the different nodes.
While extracting close to optimal bonus speedup (2x) from pipelining might require considerable engineering effort, we implemented a suboptimal version of pipelined SRDS and already achieve substantial speedups compared to non-pipelined SRDS and more importantly baselines such as ParaDiGMS. Please refer to the combined response for quantitative results on the same.
> Why the SRDS Iters shown in Table 1 are not integers?
In Table 1, for pixel-based diffusion, we used a very tight/conservative threshold on the convergence of the output in order to define early convergence. We then measure the average number of SRDS iterations required until convergence across 5000 samples, which is not necessarily an integer. This contrasts with Table 2 (Stable Diffusion), which caps the number of iterations with no measurable degradation in sample quality. We will make this clearer in the revised manuscript, and will add an equivalent version of Table 1 with a capped number of iterations in order to obtain a fairer comparison to baselines (e.g. ParaDiGMS) and demonstrate much greater speedups without loss of sample quality. In fact, Figure 7 in Appendix C already supports the fact that even for pixel-based diffusion models, we can limit the number of SRDS iterations to 1 or 2 without degradation in sample quality, while achieving much greater speedups than reported in Table 1.
> Why the first and second rows of Figure 6 are almost the same?
The first and second rows of Figure 6 seek to highlight the early convergence of the SRDS algorithm to the output of sequential sampling. For a visualization of the ‘iterative refinement’ through rounds of parareal iterations, we direct the reviewer to Figure 1. | Summary: This work proposes SRDS, a sampler for diffusion models that applies the Parareal algorithm, to reduce the overall sampling latency by introducing extra but parallelizable network evaluations compared to the fully sequential fashion. With higher device utilization or device parallelism through batched inference and pipelining, SRDS successfully reduces the overall sampling latency of existing diffusion models without sacrificing the sample quality.
Strengths: - This paper is well written.
- The proposed SRDS sampling pipeline is flexible and extendable, providing a neat baseline and rich directions for future technical improvements.
- The experiment successfully demonstrates SRDS's practical value of reducing the sampling latency compared to full sequential sampling.
- The ablation studies provide a nice practical guidance for SRDS.
Weaknesses: - As the authors addressed in the limitation part, SRDS relies on parallelizing extra computations in exchange for a reduction in total latency, which may not be applicable to some scenario that performs full batch sequential sampling. However, I believe SRDS could still work in a fairly broad range of cases.
- The important ParaDiGMs baseline is not compared in the experiment part.
### Minor
- The meaning of the abbreviation "IVP" in line `1:` of Algorithm.1 is unclear.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In the current SRDS pipeline, the intermediate results of the fine-grained solver are only used once. Do you have any thoughts on utilizing them in subsequent steps for better sample quality or faster convergence?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and are glad to hear the reviewer’s agreement of the broad applicability of SRDS! We respond to the concerns below:
> ParaDiGMs baseline
Please refer to the combined response to all reviewers, where we clearly demonstrate the superiority of SRDS over ParaDiGMS. For instance, SRDS is up to 38% faster than ParaDiGMS for DDIM on identical hardware even when using very generous convergence thresholds for ParaDiGMS. We hope that addition of these results, as well as the comparisons between different samplers, helps provide more context to the impact of SRDS in diffusion model sampling.
> In the current SRDS pipeline, the intermediate results of the fine-grained solver are only used once. Do you have any thoughts on utilizing them in subsequent steps for better sample quality or faster convergence?
We appreciate the question about the intermediate results of the fine-grained solver being used only once, as it highlights that there is a rich body of work in parallel-in-time integration methods that may also be applicable to parallelizing the sampling of diffusion models. In particular, the intermediate-reuse idea appears similar to PITA [1]; however, the inversion required in the calculation of the projection matrix makes it more complicated and less amenable to the high-dimensional outputs common to diffusion models. There may be room for such methods in latent diffusion, though we have not yet explored this approach due to its complexity.
There may be more room for simpler ideas in diffusion sampling, as the approximation to the score function may be inaccurate and simple averaging in re-use may provide a better approximation of the true score. These have not been explored as the main goal of the paper was to ensure that parallelized sampling was finding the solution to the ODE defined by the network. This would certainly be an interesting future direction to explore theoretically and empirically as an extension to SRDS!
> The meaning of the abbreviation "IVP" in line 1: of Algorithm.1 is unclear.
Thanks for pointing this out. We will update the manuscript to clarify that it refers to ‘Initial Value Problem’.
---
References:
- [1] C. Farhat and M. Chandesris, “Time-decomposed parallel time-integrators: theory and feasibility studies for fluid, structure, and fluid-structure applications,” International Journal for Numerical Methods in Engineering, vol. 58, no. 9, 2003, doi: 10.1002/nme.860. [Online].
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the rebuttal, especially the additional results. I will keep my current recommendation unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! We are pleased to hear that your concerns have been addressed. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful reviews! We are thrilled to hear that the reviewers find that our paper is “well written and easy to follow”, “demonstrates a very interesting direction”, provides “a neat baseline and rich directions for future technical improvements”, and “will benefit practical applications such as real-time image editing”.
Here, we answer common questions:
- Empirical comparison to ParaDiGMS baseline: Up to 38% faster than ParaDiGMS for DDIM on identical hardware even when using very generous convergence thresholds for ParaDiGMS; even faster when using stricter thresholds!
- Quantitative evaluation of pipelined SRDS: Up to 20% faster than non-pipelined SRDS even with a suboptimal implementation!
- Incorporation of other samplers: SRDS with DDPM and DPMSolver yield similar speedups of up to ~3.6x over sequential!
Other questions are answered in individual responses to reviewers. Further, we’ll include these results/clarifications in our revised manuscript.
# Comparison to Baselines
We demonstrate the superiority of SRDS to baselines ParaDiGMS [1] and ParaTAA [2]. We especially thank reviewer KhBd for bringing [2] to our attention. [2] takes a different approach to accelerating parallel sampling (solving triangular nonlinear systems using special techniques) and we'll include a thorough discussion of it in the final manuscript. Unfortunately, as this is a new paper (ICML 2024), we were unaware of it while writing our paper, and consequently we're unable to provide a comprehensive empirical comparison with [2] with identical hardware at the moment. Nonetheless, for now, we demonstrate the high-level superiority of SRDS solely by using the results published by the authors in [1] (Table 5) and [2] (Table 1).
In the table below, we show that SRDS offers better wall-clock speedups (over sequential) in sample generation time for StableDiffusion when compared to [1] and [2]. We clarify that the reported speedup for each method is w.r.t. the sequential solve on the same machine on which the corresponding parallel method was evaluated. Our results below are particularly impressive given that the authors of [1] used 8x 80GB A100s for the evaluation and the authors of [2] used 8x 80GB A800s for the same, while we (SRDS) only used 4x 40GB A100s for the evaluation due to computational constraints. (When interpreting, recall that a sequential solve isn't compute/memory bound and doesn't benefit significantly from more GPU compute, whereas the parallel methods certainly do!) We'd also like to highlight the superiority of SRDS over the baselines in the regime of a small number of denoising steps (25) as particularly impactful.
||Denoising Steps|ParaDiGMS|ParaTAA|Pipelined SRDS|
|---|:---:|:---:|:---:|:---:|
|DDIM|100|2.5x|1.92x|**2.73x**|
|DDIM|25|1.0x|1.17x|**1.72x**|
For the main baseline ParaDiGMS that we consider in our paper, we also performed a more extensive evaluation of both methods on equal hardware to more clearly demonstrate the benefits of SRDS. Below, we demonstrate that SRDS consistently beats ParaDiGMS on wall-clock speedups. Though the ParaDiGMS paper uses a convergence threshold of 1e-3, we show that SRDS can provide impressive speedups even when compared to significantly relaxed ParaDiGMS thresholds of 1e-1. The following StableDiffusion experiments are performed on identical machines (4x 40GB A100 GPUs) for a fair comparison.
||Serial|Serial|Pipelined SRDS|ParaDiGMS|ParaDiGMS|
|---|:---:|:---:|:---:|:---:|:---:|
||Model Evals|Time Per Sample|Time Per Sample|Threshold|Time Per Sample|
|DDIM|961|44.88|**10.31 (4.3x)**|1e-3|275.29|
|||||1e-2|20.48|
|||||1e-1|14.30|
|DDIM|196|9.17|**2.85 (3.2x)**|1e-3|29.45|
|||||1e-2|5.08|
|||||1e-1|3.42|
|DDIM|25|1.18|**0.69 (1.7x)**|1e-3|1.98|
|||||1e-2|1.51|
|||||1e-1|0.77|
# Pipelined SRDS
We demonstrate that pipelining can indeed further speed up SRDS. We implemented a slightly suboptimal version of pipelined SRDS for StableDiffusion below and already observe significant speedups; with more engineering effort, we can push further towards extracting the full potential of pipelining. However, since this implementation already beats the baselines, we believe it sufficiently demonstrates the benefits of SRDS.
Our implementation is suboptimal due to using a single device for coordinating pipeline parallelism and device transfers (artifact from torch.multiprocessing). A better approach would use ring-like communication between devices instead of relying on a coordinator.
||Serial|SRDS|SRDS|Pipelined SRDS|Pipelined SRDS|
|---|:---:|:---:|:---:|:---:|:---:|
||Model Evals|Eff Serial Evals|Time Per Sample|Eff Serial Evals|Time Per Sample|
|DDIM|961|93|12.30|63|**10.31**|
|DDIM|196|42|3.30|27|**2.85**|
|DDIM|25|15|0.82|9|**0.69**|
# Incorporation of Other Solvers
As stated in our paper, we'd like to emphasize that SRDS provides an orthogonal improvement compared to the other lines of research on accelerating diffusion model sampling. In particular, while the main experiments (and writing) were focused on DDIM, SRDS is compatible with other solvers and they can be readily incorporated into SRDS to speed up diffusion sampling. For example, SRDS is directly compatible with other solvers such as DDPM (often requiring more steps than DDIM) and DPMSolver (often requiring fewer steps than DDIM) and can efficiently accelerate sampling in both cases, as shown with StableDiffusion below.
||Sequential|Sequential|SRDS|SRDS|SRDS|
|---|:---:|:---:|:---:|:---:|:---:|
||Model Evals|Time Per Sample|Eff Serial Evals|Time Per Sample|Speedup|
|DDPM|961|44.68|93|12.30|**3.63x**|
|DDPM|196|9.03|42|3.26|**2.76x**|
|DPMSolver|196|10.30|42|3.49|**2.95x**|
|DPMSolver|25|1.31|15|0.88|**1.48x**|
|DDIM|196|9.17|42|3.30|**2.77x**|
|DDIM|25|1.18|15|0.82|**1.43x**|
We also highlight that our Diffusers-compatible implementation needs only minor changes to solver arguments, indicating that SRDS can likely be easily extended to future community-developed methods. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents a new approach to speed up (improve latency) the generation of samples from diffusion models. The approach is orthogonal to many other approaches present in the literature for the same task. It leverages the Parareal algorithm for the task by getting a quick coarse approximation of the sample and then refining it iteratively in parallel, thus reducing the latency while maintaining sample quality. The authors present results for pre-trained pixel-space diffusion models and Stable Diffusion and find up to 2x speed-ups for the latter.
Strengths: Improving the latency of diffusion models is an active and important area of research. The paper presents a new strategy that relies on leveraging parallel computation of modern hardware. This approach can in theory be combined with other strategies to lead to further improvement, and does not require re-training. The presented approach seems to have guarantees on the quality of the solution and some control on trading off the speed and quality. There is also thought given to efficient batching and pipelining tasks for further potential improvements, which is good. Overall, the paper is also well written and easy to follow.
Weaknesses: The main weakness of the paper is the lack of strong and diverse results. The authors only test one diffusion model task. Furthermore, the best speed-up is only 2.3x, while another experiment results in a 0.7x speed-up, which is concerning.
The authors argue on theoretical grounds that this approach can be combined with other approaches to reduce latency of diffusion models. This is fair, but they do not show any empirical results around it.
Finally, the approach relies completely on leveraging parallel refinement and the actual number of model evaluations are much larger (often by a factor of 3x) which can limit the applicability in some cases.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The authors claim that the worst-case sampling latency is no worse than generating a sample through sequential sampling. Yet one of their results (the last one in Table 2) has a speed-up of 0.7x. Is this all due to GPU overhead?
- Is it possible to present empirical results with pipeline parallelism? It seems like the authors expect 2x gains with it, but I assume it will require some computational overhead, and based on the previous question, the degradation can be non-trivial.
- While I understand the claim regarding this approach being orthogonal to other works on reducing latency, showing some experiments around the same will make for a stronger paper.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There are a few limitations discussed towards the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review, and appreciate that they find the paper well written and easy to follow! We respond to the remaining concerns below:
> The authors argue on theoretical grounds that this approach can be combined with other approaches to reduce latency of diffusion models. This is fair, but they do not show any empirical results around it.
In order to empirically demonstrate that the SRDS method can be used in conjunction with other approaches to reduce the latency of diffusion model sampling, we have extended beyond the base DDIM solver primarily used in the paper. We tested SRDS in conjunction with both DDPM (which takes relatively more steps than DDIM) and DPMSolver (which can reduce the number of steps beyond DDIM). For both methods, SRDS maintains a speedup over the corresponding baseline serial implementation (~3.6x for 1000-step DDPM and ~2.9x for 200-step DPMSolver). In this way, we show that SRDS could, for example, be combined with the improvement from DDPM → DDIM or DDIM → DPMSolver in order to orthogonally improve latencies. Please refer to the combined response to all reviewers for more details on the same.
> The approach relies completely on leveraging parallel refinement and the actual number of model evaluations are much larger (often by a factor of 3x) which can limit the applicability in some cases.
Yes, indeed there is a tradeoff: as we note in the introduction section, SRDS provides speedups in sampling at the cost of potentially additional parallel compute. While this might limit applicability in some cases (such as compute-bound, batched inference workloads), we would like to emphasize that this tradeoff enables diffusion models to serve many other use cases, such as real-time image or music editing and trajectory planning in robotics. In fact, there is an increasing trend of diffusion models being run locally, where small enough models or strong enough hardware can support the batching described in this paper. In such scenarios, strict latency requirements may justify trading additional compute for lower wall-clock times. Moreover, in a number of cases, we have found that SRDS provides reasonable predictions within a single Parareal iteration; here, the total number of model evaluations is only slightly larger than the serial approach (increasing from $n$ to $n + 2\sqrt{n}$). Lastly, it is also worth noting that many users are often agnostic to inference-time GPU compute costs, as these are orders of magnitude lower than training compute costs anyway.
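As a rough sanity check of the eval counts above, the arithmetic can be sketched as follows (a sketch under our own assumption of a $\sqrt{n}$ coarse decomposition, which the $n + 2\sqrt{n}$ figure implies; the function name is ours):

```python
import math

def model_evals(n: int, parareal_iters: int = 1) -> dict:
    """Count model evaluations, assuming n fine steps split into
    sqrt(n) coarse intervals (exact when n is a perfect square)."""
    m = math.isqrt(n)  # number of coarse intervals
    # Each Parareal iteration adds one coarse sweep (m evals) on top of
    # the fine solves, plus the initial coarse pass: n + (k + 1) * m total.
    srds = n + (parareal_iters + 1) * m
    return {"serial": n, "srds": srds, "overhead": srds / n}

counts = model_evals(961)  # 961 = 31**2 fine steps
# counts["srds"] == 1023, i.e. only ~6% more evals than the 961 serial ones
```

With a single Parareal iteration the extra compute is marginal; each additional iteration adds only another coarse sweep of $\sqrt{n}$ evaluations.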
> Yet one of their results (last on in Table 2) has speed up of 0.7x. Is this all due to GPU overhead?
No, while there is certainly some loss due to GPU overhead, the <1.0x speedup is primarily due to the lack of pipeline parallelism (meaning that coarse and fine solves are performed serially rather than in parallel). Note that the worst case sampling latency being no worse than sequential sampling only holds for the pipelined version. Without pipelining, in the worst case, the speed up is 0.5x, and the 0.7x speaks to the early termination/convergence of SRDS. Please refer to Figure 4 for an illustration that clarifies this.
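A back-of-the-envelope sketch of the worst-case figures above (our own illustration; it assumes, per the explanation, that the non-pipelined variant runs the coarse and fine solves back to back, costing up to twice the sequential time):

```python
def worst_case_speedup(pipelined: bool) -> float:
    """Worst-case speedup over a sequential solve normalized to 1 time unit."""
    serial_time = 1.0
    # Without pipelining, coarse and fine sweeps are serialized, so the
    # worst case (no early convergence) takes ~2x the sequential time.
    srds_time = serial_time if pipelined else 2.0 * serial_time
    return serial_time / srds_time

worst_case_speedup(pipelined=False)  # 0.5, matching the 0.5x worst case
worst_case_speedup(pipelined=True)   # 1.0, i.e. no worse than sequential
```

The observed 0.7x for the last row of Table 2 then sits between the 0.5x worst case and 1.0x, consistent with early termination/convergence.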
> Is it possible to present empirical results with pipeline parallelism? It seems like the authors expect 2x gains with it, but I assume it will require some computational overhead, and based on the previous question, the degradation can be non-trivial.
Please refer to the combined response to all reviewers for a broad discussion. While achieving close to 2x bonus gains with pipeline parallelism requires considerable engineering effort, we implemented a somewhat suboptimal version of pipelined SRDS, and this already showcases significant speedups compared to both non-pipelined SRDS (~20% speedup) and, more importantly, baselines such as ParaDiGMS (up to 38% faster than ParaDiGMS for DDIM on identical hardware even when using very generous convergence thresholds for ParaDiGMS; even faster when using stricter thresholds). The less-than-optimal speedups stem from GPU and scheduling overhead.
> While I understand the claim regarding this approach being orthogonal to other works on reducing latency, showing some experiments around the same will make for a stronger paper.
We hope the discussion/experiments in the common response on readily incorporating other solvers such as DDPM and DPMSolver into SRDS helps make this a stronger paper and provides context to how SRDS can be used in conjunction with other improvements in diffusion modeling!
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their responses and new experiments. I am satisfied with their rebuttal and will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! We are glad to hear that your concerns have been addressed. | null | null | null | null | null | null |
Absorb & Escape: Overcoming Single Model Limitations in Generating Heterogeneous Genomic Sequences | Accept (poster) | Summary: The authors propose to generate DNA sequences using pre-trained DMs and then modify randomly selected segments through autoregressive models.
Strengths: The problem that the authors are trying to address, i.e. heterogeneity of the sequences due to existing multiple different element is valid question in drug discovery, specifically for very long DNA sequences.
Weaknesses: The main weakness of the paper is that the authors didn't validate the proposed approach on real-world long sequences, and the simulated data is not convincing. Apart from that, it is not clear how they train the baselines for multiple tissues, and the fact that the authors generalize their claims to all DMs while only considering a version they proposed themselves makes the claims somewhat unacceptable.
Technical Quality: 2
Clarity: 2
Questions for Authors: - How did you train the model for DDSM in the case of multiple species? How did you incorporate species information?
- How do the results look in the same dataset that DDSM and DFM used for evaluation, specifically in promoter design and enhancer design in a single species? It is unclear how the proposed DMs function initially in such scenarios.
- What are the benefits of the proposed DMs and why can't one use previous pre-trained DMs, such as DDSM?
- What is the effect of absorption and escape in unconditional generation?
- How did you choose the length of segments in real-world and synthetic cases?
- What is the effect of the segment selection?
- Why can't one use evolutionary-based models for each segment? At least as a baseline.
- Can you show a couple of generated samples for each method, showing how the segments look in real and generated data?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The main limitation of the paper is that it is very dependent on the pre-trained models, as well as on selecting segments in the right manner.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback. We have provided additional experimental results in response to your questions, including applying the A\&E algorithm to Dirichlet Flow Matching on a new task, as well as answers to common questions from other reviewers. In the following, we respond to your questions about this paper.
### Q1. A&E performance on the DDSM task
We ran **A\&E** on this task with a default threshold $T_{\text{absorb}}=0.85$. The same dataloader and random seed as the [DFM repository](https://github.com/HannesStark/dirichlet-flow-matching?tab=readme-ov-file) were used. The **A\&E** model comprises AR and DM components, specifically the **language model** and **distilled DFM** checkpoints provided in the DFM repository. As shown in the table below, **A\&E** achieved state-of-the-art results of **0.0262** on the test split, despite using the second best DM, **distilled DFM**.
| Method | MSE $\downarrow$ |
|-------------------------------------|-------|
| Bit Diffusion (bit-encoding) | 0.0414 |
| Bit Diffusion (one-hot encoding)| 0.0395 |
| D3PM-uniform | 0.0375 |
| DDSM | 0.0334 |
| Language Model | 0.0333 |
| Linear FM | 0.0281 |
| Dirichlet FM | 0.0269 |
| Dirichlet FM distilled | 0.0278 |
| **A\&E** (Language Model+Dirichlet FM distilled) | **0.0262** |
### Q2. Validate the proposed approach in real world long sequences
The evaluation in Section 5 is based on real-world DNA sequences from the EPD database. For details on the dataset construction process, please refer to the global response. The only simulated data used is in Section 3, which illustrates the limitations of AR models and DM in handling heterogeneous data. Additionally, we have provided results on the performance of A&E in the *transcription profile conditioned promoter sequence design task* from the DDSM paper (see global response and Answer to Q3 below).
### Q3. How did you train the model for DDSM in the case of multiple species?
We did not train DDSM for multi-species conditioning. DDSM is evaluated in an unconditional setting in Section 5.1, where the training data are promoter sequences from {human, rat, Macaca mulatta, mouse}.
### Q4. The benefits of the proposed DMs and why can’t one use previous pre-trained DMs, such as DDSM?
The Latent Diffusion Model (LDM) proposed in this paper serves as a baseline model. We found that this simple baseline outperformed existing models such as DDSM in the unconditional generation scenario (shown in Table 3 of Section 5.2). Our aim is not to prove that the proposed baseline LDM outperforms other discrete DMs. Instead, the focus of this paper is to demonstrate the effectiveness of the proposed sampling algorithm A&E by showing that the composed model can outperform single models.
### Q5. What is the effect of absorption and escape in unconditional generation?
We have included additional results below. As you can see, the A&E algorithm achieves the best performance in unconditional generation.
|Model|EPD(256bp)|||EPD(2048bp)|||
|-------------------------------------|--------------|----------|-----------|---------------|---------|-----------|
||S-FID↓|Cor_TATA↑|MSE_TATA↓|S-FID↓|Cor_TATA↑|MSE_TATA↓|
|VAE|295.0|-0.167|26.5|250.0|0.007|9.40|
|BitDiffusion|405|0.058|5.29|100.0|0.066|5.91|
|D3PM(small)|_97.4_|0.0964|4.97|_94.5_|0.363|1.50|
|D3PM(large)|161.0|-0.208|_4.75_|224.0|0.307|8.49|
|DDSM(TimeDilation)|504.0|_0.897_|13.4|1113.0|_0.839_|2673.7|
|DiscDiff(Ours)|57.4|0.973|0.669|45.2|0.858|_1.74_|
|A&E(Ours)|**3.21**|**0.975**|**0.379**|**4.38**|**0.892**|**0.528**|
### Q6 How did you choose the length of segments in real-world and synthetic cases?
Algorithm 1 requires segment annotations (start and end positions of each segment) to be available. However, this information is not available in most use cases. To overcome this issue, Algorithm 2 is proposed to run without knowing the true segment annotations, as detailed in line 193, Section 4.2. In both cases, the user does not need to choose the segment lengths.
For synthetic data used in the toy example, we already know the segment annotations by construction.
### Q7 What is the effect of the segment selection.
Similar to Q6, neither Algorithm 1 nor Algorithm 2 needs to select segments. For line 5 in Algorithm 1, the segments can be sampled randomly.
### Q8 Why one cannot use evolutionary based models for each segment?
Algorithm 2 handles the practical DNA generation scenario in which the user only has DNA sequences without annotations. In addition, conventional promoter design methods based on existing promoter libraries span only a short section of the promoter.
### Q9 Examples of how the segments look like in real and generated data
The examples of generated sequences are available from the following [Anonymous repo](https://anonymous.4open.science/r/dna_examples-3248/). Note that no segment annotations are available for the training data; we apply Algorithm 2 for generation. | Summary: This paper addresses limitations in existing methods for generating genomic sequences, which struggle to capture the heterogeneous nature of DNA. The authors propose a new framework called Absorb & Escape (A&E) that combines the strengths of autoregressive (AR) models and diffusion models (DMs) to generate more realistic DNA sequences. They first analyze the shortcomings of AR models and DMs when used individually for heterogeneous sequence generation. Then they introduce A&E, which alternates between refining segments using an AR model (Absorb step) and updating the full sequence (Escape step). A practical implementation called Fast A&E is also presented. The authors evaluate their method on a large dataset of promoter sequences from 15 species. They compare Fast A&E against state-of-the-art baselines on metrics like motif distribution similarity, sequence diversity, and functional properties when inserted into genomes.
Strengths: - Absorb & Escape is a plausible solution to addressing the limitations of autoregressive models and diffusion models
- theoretical analysis of their method -- convergence proof
- comparisons against previous diffusion models on promoter generation
- comparison across multiple species DNA generation
- deploys various metrics to assess generated sequences
- new training data that considers multispecies
Weaknesses: - If trained on original DDSM dataset, how would DiscDiff perform? I ask because the DDSM seems to perform quite poorly in terms of MSE in Table 3, but its performance was reasonable even when compared with Dirichlet Flow Matching (Stark et al, arxiv 2023).
- The explanation that the autoregressive model struggles to capture the difference between p1 and p2 with a single set of parameters could be true, but hierarchical AR models could learn latent segment states. So, it is unclear why learning the heterogeneous structure is challenging for deep AR models in practice.
- Regulatory sequences can be quite diverged across species where a promoter in one species may not be a strong promoter in another species. So, what is the benefit of multi-species promoter pretraining? Perhaps comparing vs just on human promoters (i.e. DDSM dataset using CAGE-seq) could help appreciate the nuances.
- In 5.1, under baseline models, DNADiffusion is cited but not explored in Table 3. HyenaDNA is not benchmarked in Table 3. It would be good to get a genomic language model baseline since the purpose of DiscDiff is to resolve issues of diffusion models and autoregressive models.
- It's not clear how the metrics were calculated. How were motif distributions calculated? Also, what about the MSE of the conditional generation of the model (using Sei as an oracle)? This was done in the DDSM paper as well as in DFM.
- Dirichlet flow matching performed better than DDSM, so it would be worth benchmarking against it (though this may be outside the scope of this paper). At least mention the new diffusion models that beat DDSM; the open question remains whether they require Absorb & Escape or whether they can also benefit.
- Fig 3 doesn't show any comparisons with other diffusion models (eg. DDSM)
- Table 5 should show the distribution of Enformer predictions to see if the generated distributions are even close to natural sequences. The presented metrics are difficult to assess beyond relative comparisons.
Technical Quality: 4
Clarity: 4
Questions for Authors: Questions are integrated within Weaknesses (above).
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: While the method is very interesting, potentially bridging the limitations of autoregressive models and diffusion models, the evaluation is a bit on the weak side. This makes it difficult to assess any advance. Better evaluations could help improve this work to demonstrate the true gains. It would also be worthwhile to explore Absorb & Escape using other SOTA diffusion models (e.g., DFM).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback. Following your suggestions, we have provided additional results comparing A&E with DDSM and DFM in the global response, along with clarifications to address common questions from other reviewers. Below, we provide detailed responses to each of your questions.
### Q1. If trained on the original DDSM dataset, how would DiscDiff perform?
We believe the DDSM task is easier than unconditional and class-conditioned generation. The DDSM task uses a transcription profile, a real-valued array of the same length as the DNA sequence, as the condition, allowing the generative model to learn a direct mapping from condition to sequence.
This became evident when we applied the DDSM model to the unconditional generation task proposed in this paper during the exploration stage. Initially, the original score net design from DDSM could not learn the training set distribution in unconditional generation. We scaled the original score net up by 10 times, which allowed DDSM to perform reasonably on unconditional sequence generation.
Due to time constraints, we could not adapt the DiscDiff architecture to the DDSM task. However, we provide additional experiments with the A&E algorithm on the DDSM task by using the pretrained language model as the AR component and the DFM distilled model as the DM component. As shown below, A&E achieved a new state-of-the-art on this task.
| Method | MSE $\downarrow$ |
|-------------------------------------|-------|
| Bit Diffusion (bit-encoding)| 0.0414 |
| Bit Diffusion (one-hot encoding)| 0.0395 |
| D3PM-uniform | 0.0375 |
| DDSM | 0.0334 |
| Language Model | 0.0333 |
| Linear FM| 0.0281 |
| Dirichlet FM| 0.0269 |
| Dirichlet FM distilled | 0.0278 |
| **A\&E** (Language Model+Dirichlet FM distilled)| **0.0262** |
### Q2. Why deep AR models do not learn the heterogeneous structure
As detailed in line 120, Section 3 of the original paper, given the assumption that a sequence consists of independent segments $seg_1$ and $seg_2$, AR models struggle to learn the independence between two elements $x_k \in seg_1$ and $x_{2k} \in seg_2$. This issue becomes more challenging when the training data is insufficient. Because AR models have intrinsic assumptions about token dependencies, overcoming this challenge can be difficult.
### Q3. Benefit of multi-species promoter pretraining and Additional Experiment on DDSM dataset
We can train a single model for each species for promoter generation. However, for certain species (e.g., P. falciparum in EPD), we have limited data, which is insufficient for training a generative model. By training on cross-species data, our model leverages genomic similarities across different species, enabling promoter sequence generation for species with limited data. We expect future foundation models for genomics to make use of all available data, just as LLMs make use of web-scale data, and to perform downstream tasks with contextual prompts.
See the response to Q1 for the additional exp. results.
### Q4. DNADiffusion is cited but not explored in Table 3
We thank the reviewer for pointing it out. This is a typo. We mislabeled DNADiffusion as BitDiffusion in Table 3. We ran the training code from the DNADiffusion repository and benchmarked it on the EPD datasets. The details of the experiments are shown in Appendix D.
### Q5 a) How motif distributions were calculated
We use the [EPDnew analysis tools - motif distribution](https://epd.expasy.org/epd/EPDnew_study.php) for plotting the motif distributions and obtaining the motif distribution data.
### Q5 b) How MSE is calculated in conditional generation
We clarify the definition of the metric in the final section of the global response. Note that the Sei model is used for evaluating unconditional generation, as Sei is designed for the human genome. For conditional generation, the MSE is computed between the **motif distribution** of the generated sequences and that of natural sequences, as detailed in line 250, Section 5.2.
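For concreteness, the conditional-generation metric described above reduces to a plain mean squared error between two equal-length motif-distribution vectors (the function and example values below are ours, for illustration only):

```python
def motif_mse(generated_dist, natural_dist):
    """MSE between motif-distribution vectors of generated vs. natural DNA."""
    assert len(generated_dist) == len(natural_dist)
    n = len(generated_dist)
    return sum((g - t) ** 2 for g, t in zip(generated_dist, natural_dist)) / n

# Hypothetical per-position motif frequencies.
gen = [0.10, 0.30, 0.60]
nat = [0.10, 0.40, 0.50]
err = motif_mse(gen, nat)  # (0.0 + 0.01 + 0.01) / 3 ≈ 0.0067
```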
### Q6. Benchmarking against DFM
See the response to Q1.
### Q7. Fig 3 doesn’t show any comparisons with other diffusion models (e.g., DDSM)
One reason is that training every model on the whole dataset is expensive. We therefore first select the best-performing DM and AR components via unconditional generation in Section 5.1 of the original paper. Secondly, Section 5.2 demonstrates the effectiveness of the A&E algorithm by showing that A&E outperforms the single models.
### Q8. Distribution of enformers predictions
We are happy to include the single-track expression level prediction results in the appendix of the final draft, but due to the one-page PDF limit, we don't have additional space to include them here.
---
Rebuttal Comment 1.1:
Title: Satisfactory response
Comment: I thank the authors for their clarifications and additional experiments. I believe this is now a solid contribution and therefore will maintain my score of accept. | Summary: The authors introduce a novel approach, called Absorb & Escape (A&E), for generating DNA sequences by combining the strengths of autoregressive (AR) and diffusion models (DMs). The authors argue that existing single-model approaches struggle with the heterogeneous nature of genomic sequences, which consist of multiple functionally distinct regions. Their method initially generates a sequencing using a DM, and then refines sequence segments using an AR model. They evaluate their approach on various design tasks. The authors claim improved performance over existing methods, particularly in generating sequences that satisfy complex constraints and exhibit properties similar to natural genomic sequences.
Strengths: The combination of AR and DM is innovative, and these approaches do seem to excel and struggle at different aspects of the problem.
Biological sequence design is generally an interesting research area with potentially valuable applications.
Weaknesses: The biological application lacks compelling justification. The authors do not clearly explain why generating synthetic promoter sequences is necessary or advantageous compared to using and optimizing existing genomic promoters. A comparison with established promoter optimization methods is missing, which is crucial for demonstrating the practical value of this approach in genomic applications.
The paper suggests broader applicability of the A&E method beyond DNA design, but this is not sufficiently supported by the presented experiments. Additional tasks and datasets would be necessary to make a convincing case for the method's general effectiveness.
The paper lacks an analysis of how the various hyperparameters, such as the Absorb Threshold (T_Absorb), affect the model's performance. A sensitivity analysis would provide valuable insights into the robustness of the method.
The use of Sum of Squared Errors (SSE) to evaluate generated promoters against natural promoters is questionable. Given that Enformer makes predictions for a large region of sequence (896 consecutive 128 bp bins), this metric may not accurately reflect the quality or functionality of the generated promoters, which cover 1-2 bins.
Technical Quality: 2
Clarity: 2
Questions for Authors: Algorithm 2 assumes access to token-level probabilities from the diffusion model (p_DM), which is not a standard output for most diffusion models. The authors do not explain how they obtain these probabilities, which is a critical missing piece of information for understanding and implementing their method.
There is insufficient exploration of how the model learns to differentiate between different genomic regions (e.g., coding sequences, promoters, introns). Does the AR model produce different generations in these different regions? This analysis would strengthen the claim that the method effectively handles heterogeneous sequences.
The description of the Eukaryotic Promoter Database (EPD) content is vague. For example, why would a promoter database contain coding sequences? A more specific explanation of the types of sequences it contains would help readers understand the dataset's composition and relevance.
In Algorithm 2, line 3, I cannot determine exactly what p^DM refers to, i.e., is this a global probability of the entire sequence or one specific to a local region? Its presence within the line 2 loop suggests locality.
The convergence criteria for Algorithm 1 are not specified, which makes it difficult to assess the algorithm.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback on the paper. In our general response, we included additional results comparing A&E with other methods on additional datasets, a sensitivity analysis of the hyperparameter $T_{absorb}$, an explanation of how to define the token-level probability from the diffusion model, and answers to other common questions raised by reviewers. We suggest reviewing the global response first. Below, we provide detailed responses to each of your questions.
### Q1. Sensitivity Analysis over $T_{absorb}$
Figure 23 of the global response provides a sensitivity analysis of the A&E algorithm over $T_{absorb}$. As $T_{absorb}$ increases, the motif plot correlation between generated DNA and natural DNA first increases and then flattens. A larger $T_{absorb}$ encourages more frequent corrections by the AR model, improving the quality of generated sequences. However, a smaller $T_{absorb}$ is more computationally efficient as it requires fewer function evaluations. A&E is robust over a wide range of $T_{absorb}$. While it is best to use the validation dataset to choose the optimal value, we found that a value of 0.85 is generally appropriate for different tasks and scenarios.
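To make the role of $T_{absorb}$ concrete, the gating behaviour described above can be sketched as a toy sampling loop. This is an illustrative reading only, with hypothetical names (`absorb_and_escape`, `dm_probs`, `ar_resample`), not the paper's Algorithm 2:

```python
def absorb_and_escape(seq, dm_probs, ar_resample, t_absorb=0.85, max_iters=100):
    """Toy sketch: while any position's DM confidence is below t_absorb,
    let the AR component rewrite the least-confident position."""
    seq = list(seq)
    for _ in range(max_iters):
        # DM confidence assigned to the current token at each position
        confs = [dm_probs[i][tok] for i, tok in enumerate(seq)]
        worst = min(range(len(seq)), key=lambda i: confs[i])
        if confs[worst] >= t_absorb:
            break  # all positions confident enough: stop correcting
        seq[worst] = ar_resample(seq, worst)  # AR model proposes a fix
    return "".join(seq)
```

A larger `t_absorb` flags more positions as under-confident and so triggers more AR corrections, which matches the quality/efficiency trade-off described above.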
### Q2. Additional Task and Dataset
Following your suggestion, we have compared **A&E** with other SoTA discrete generation algorithms on the *transcription profile conditioned promoter sequence design task* from the DDSM paper. As shown in Table 6 of the uploaded PDF, A\&E with a Language Model as the AR component and distilled DFM as the DM component achieves the smallest MSE of 0.0262, outperforming DFM and DDSM. This confirms the effectiveness of A&E across various tasks. Please see the global response for more details about the additional experiment.
|Method| MSE $\downarrow$|
|-------------------------------------|-------|
|Bit Diffusion (bit-encoding)| 0.0414 |
|Bit Diffusion (one-hot encoding)| 0.0395 |
|D3PM-uniform | 0.0375 |
|DDSM| 0.0334 |
|Language Model| 0.0333 |
|Linear FM | 0.0281 |
|Dirichlet FM| 0.0269 |
|Dirichlet FM distilled| 0.0278 |
|**A\&E** (Language Model+Dirichlet FM distilled) |**0.0262**|
### Q3. Motivation for Promoter Generation Task
As noted in prior work (ExpGAN [1]), conventional promoter design methods based on existing promoter libraries cover only a short section of the promoter. In contrast, deep generative models for promoter design learn the distributional properties of promoter sequences, enabling the generation of new promoters that classical methods cannot produce. This motivation is shared by the DDSM and DFM papers and other DNA generation tasks.
### Q4. How Enformer predictions are used for Evaluation
The Enformer outputs are used to show how the promoter sequence influences the downstream target gene instead of the bins corresponding to the promoter sequence itself. We inserted the generated promoter before the target gene, e.g., TP53, and checked how the promoter changed the expression level of the target gene. While the promoter sequence is only 128 bp, accounting for 1 bin in the prediction, the target gene is more than 10,000 bp long. For example, TP53 is about 25,000 bp, accounting for 195 bins in the Enformer output. Using SSE, we aggregate the change in expression levels across all the cell lines.
### Q5. Token-Level Probabilities from the Diffusion Model ($p^{DM}$)
$p^{DM}$ at line 3 of Algorithm 2 refers to $p^{DM}(\mathbf{x}) \in \mathbb{R}^{L\times4}$, the token-level emission probabilities from the Diffusion Model, whose $i$-th row $p^{DM}(\mathbf{x}_i) \in \mathbb{R}^{4}$ is the probability distribution over \{A,T,G,C\} for token $i$. $p^{DM}(\mathbf{x})$ can be retrieved from most discrete diffusion algorithms, such as DDSM, DFM, and DiscDiff. For DNA generation, $p^{DM}(\mathbf{x})$ is usually stored as a variable **logits** $\in \mathbb{R}^{L\times4}$, produced after the last sampling step. In the latent diffusion model used in this paper, it is produced by the second-stage decoder (Appendix A), while in DDSM and DFM, it is produced by a 1D-CNN (output_channel=4) in the score net.
### Q6. Description of EPD Dataset
We detailed the construction process of the EPD dataset in the global response. Briefly, we downloaded all the promoter records (30 million) from the EPD database. Each record is a tuple of (sequence, species, cell type, expression level). Sequences could be duplicated across records, so we aggregated the records by sequence, producing 160K unique sequences. Each sequence is a promoter-downstream sequence centered around the TSS, of length 256 or 2048.
### Q7. How the Model Learns to Differentiate Between Different Genomic Regions
Following the suggestion from reviewer crFN, we performed BLASTN on the two halves of sampled sequences against promoter-like and protein-like segments from the training set. A larger score indicates higher similarity. The results confirm that A&E better handles heterogeneity than the AR model.
| BLAST Score | Natural Promoter | Natural Protein |
|------------------------|-----------------|----------------|
| A\&E Promoter | 19.40 | 18.88 |
| A\&E Protein | 18.89 | 19.00 |

| BLAST Score | Natural Promoter | Natural Protein |
|------------------------|-----------------|----------------|
| Hyena Promoter | 18.95 | 18.98 |
| Hyena Protein | 18.86 | 18.88 |
### Q8. Convergence criteria for Algorithm 1
The convergence criterion of Algorithm 1 is that $\tilde{\mathbf{x}}^t$ converges to a fixed distribution, which can be checked via $\|\tilde{\mathbf{x}}^{t+1} - \tilde{\mathbf{x}}^t\| < \delta$.
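The stated stopping rule amounts to a standard fixed-point iteration. A minimal generic sketch (names are ours, not the authors' implementation):

```python
import numpy as np

def iterate_to_convergence(step, x0, delta=1e-6, max_iters=1000):
    # Apply `step` until the update norm falls below delta,
    # i.e. the criterion ||x_{t+1} - x_t|| < delta.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        x_next = np.asarray(step(x), dtype=float)
        if np.linalg.norm(x_next - x) < delta:
            return x_next
        x = x_next
    return x
```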
[1] Zrimec, J., et al., 2022. Controlling gene expression with deep generative design of regulatory DNA
---
Rebuttal Comment 1.1:
Title: Additional Clarification About Enformer Prediction
Comment: We sincerely appreciate your valuable feedback. Below, we would like to provide additional clarification on how to interpret the output from Enformer.
### Additional Clarification About Enformer Prediction
We understand that the CAGE assay displays aligned reads at transcription start sites (TSSs). However, many genes, including TP53, EGFR, and AKT1, possess multiple TSSs. While we could take the bin value corresponding to a single TSS, the exact number of TSSs can vary depending on the specific cell type and experimental conditions. Therefore, focusing solely on the bin corresponding to the TSS immediately following the promoter might overlook other important TSSs. This is why we consider the entire prediction range corresponding to the gene. The output from Enformer reflects the CAGE track values across all TSS positions, ensuring more complete coverage of transcription start sites. | Summary: This paper presents a sampling approach called Absorb and Escape (A&E) that combines the strengths of diffusion models (DMs) and autoregressive models (AR models) for generating DNA sequences. The authors rightly point out that DNA sequences are generally composed of segments that do not follow the same distribution (i.e. they are heterogeneous sequences). Then, using a toy example, they clearly illustrate the shortcomings of DMs and AR models for modelling heterogeneous sequences but point out that they have complementary strengths, motivating them to combine these models. Their proposed A&E algorithm (and the practically useful Fast A&E algorithm) can be used at sampling time to leverage these complementary strengths - it first samples a sequence from a DM before iteratively refining it using an AR model. After training DMs and AR models on a new dataset of DNA sequences, the authors show that using Fast A&E leads to better samples when compared to using DMs or AR models alone. They also show that the sequences are sufficiently diverse with Fast A&E samples being less diverse than DM samples while being more diverse than AR model samples.
Strengths: - Originality: The main novel contribution of this paper is the A&E algorithm for combining the strengths of DMs and AR models. The algorithm is very well-motivated, both based on prior work and the toy example. It is also simple to implement and understand. Combining DMs and AR models for generating DNA sequences has also not been attempted by previous work to the best of my knowledge.
- Quality: The toy example is very useful and clearly illustrates the problems with current generative approaches for DNA generation. However, I have reservations about the dataset and task being used for the main evaluation although I am generally convinced of the A&E algorithm's usefulness. My questions and suggestions are listed in the next section.
- Clarity: The paper is very well-written and easy to understand. However, a few details about the evaluations are missing, I have listed them in the next section.
- Significance: The A&E algorithm is likely going to be useful for many computational genomics researchers. Even beyond DNA generation, I think the algorithm is an interesting and intuitive way to blend DMs and AR models and it could lead to more research in this area.
Weaknesses: ### Reservations about the main evaluation
- The authors use gene sequences from the Eukaryotic Promoter Database (EPD) for training and testing various models. I do not see any details about how exactly these sequences are constructed. From Table 2 and Figure 4, I assume that the authors are extracting sequences of various lengths (either 256bp for 2048bp) centered at the TSS, but the paper needs a more comprehensive description of the data processing since this data is used in all of the main experiments.
- I am not convinced of the utility of modelling these DNA sequences as presented in the paper. The authors train conditional diffusion models where the conditions are various species. I cannot think of a use case for sequences generated using this modelling strategy - there is no way to tune the promoter strength or target gene. Other DNA generation models use arguably better tasks. For example, DNADiffusion uses diffusion models to generate cell-type-specific regulatory elements which could be useful for synthetic biology applications. I would like to understand the authors' motivation for using this task.
- The paper could be made much stronger by using the same evaluations as the previous papers to show that the usage of A&E with the DMs proposed in those papers (maybe in combination with HyenaDNA for AR modelling) leads to improved performance.
- Furthermore, the authors somewhat abandon their focus on the heterogeneity of the DNA sequences being modelled in the main results. Although it is clear from Figure 4 that the motifs are located in the promoter as expected, one could perform two more simple analyses to show that A&E actually helps in modelling heterogenous sequences:
- If half the training sequence is promoter-like and the other half is protein-like, the authors could show that this is true in the sampled sequences as well (maybe by BLASTing the two halves against the promoter-like and protein-like segments from the training set).
- An even simpler analysis would be to look at the protein-like segment of the samples and show that it obeys the rules of protein coding sequences - starts with a start codon and there are no premature stop codons.
Technical Quality: 2
Clarity: 3
Questions for Authors: In addition to the questions/suggestions above, I have the following minor ones below:
- Since the absorb step is reliant on using an AR model to refine certain segments, is it better to use a masked language model instead of an AR model? This way, context from both sides of the segment can be incorporated and the probability approximation will be better.
- How are probabilities for a sequence/sequence segment extracted from the DM?
- There are discrepancies between the model names mentioned in lines 222-223 and those in Table 3.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Apart from the assumptions in Appendix C, I could not find any other discussion of the limitations. I can think of the following other limitations:
- The evaluation scheme is based on a single dataset, it is unclear how A&E will generalize to other datasets/settings.
- Tuning $T_{absorb}$ seems non-trivial.
I do not foresee any negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful reviews. Following your suggestions, we have presented additional experiments on the DDSM dataset and a more detailed description of the dataset in the general response and uploaded PDF. We hope our general reply addresses your concerns regarding the selection of $T_{absorb}$ and the extraction of $p^{DM}(\mathbf{x}_i)$. Below, we provide detailed responses to each of your questions.
### Q1. Using the same evaluations as previous papers
We present additional experimental results applying A\&E to the *transcription profile conditioned promoter sequence design* task from the DDSM paper. As shown in Table 6 of the uploaded PDF, A&E with the Language Model as the AR component and distilled DFM as the DM component achieves the smallest MSE of 0.0262, outperforming DFM and DDSM. This confirms the effectiveness of A&E across various tasks.
|Method| MSE $\downarrow$|
|-------------------------------------|-------|
|Bit Diffusion (bit-encoding)| 0.0414 |
|Bit Diffusion (one-hot encoding)| 0.0395 |
|D3PM-uniform | 0.0375 |
|DDSM| 0.0334 |
|Language Model| 0.0333 |
|Linear FM | 0.0281 |
|Dirichlet FM| 0.0269 |
|Dirichlet FM distilled| 0.0278 |
|**A\&E** (Language Model+Dirichlet FM distilled) |**0.0262**|
### Q2: Details about how exactly these sequences are constructed
We have detailed the construction process of the EPD dataset in the global response. Briefly, we downloaded all the promoter records (30 million) from the EPD database. Each record is a tuple of (sequence, species, cell type, expression level). Sequences could be duplicated across records, so we aggregated the records by sequence, producing 160K unique sequences (as mentioned in line 218 of the original paper). Each sequence is a promoter-downstream sequence centered around the transcription start site (TSS), of length 256 or 2048.
### Q3: Why species-wise generation and not cell-type conditioning?
There are two reasons for setting species conditioning as a task. First, promoter sequences from different species follow different distributions, making it an excellent testbed for benchmarking various generative algorithms. In contrast, cell-type conditioning is less distinct since all cell types share the same genome, and regulatory elements can bind to specific cell types only under certain conditions. However, if enough samples are provided, each sequence will appear in all cell types. In theory, the optimal task format should be (species, cell type, expression) co-conditioning, and this paper presents the first step towards that goal.
Additionally, while we can train a single model for each species to generate promoters, limited data is available for certain species (e.g., P. falciparum in EPD), which is insufficient for training a generative model. By training on cross-species data, our model leverages genomic similarities across different species, enabling promoter sequence generation for species with limited data. One use case of unconditional promoter generation is expanding existing promoter libraries, crucial for synthetic biology. We expect future foundation models for genomics to make use of all available data, just like how LLMs are making use of the web-scale data, and being able to perform downstream tasks with contextual prompts.
### Q4. Heterogeneity of the DNA sequences in the main results
Following your suggestion, we performed BLASTN on the two halves of the sequences sampled by A&E against promoter-like and protein-like segments from the training set. A larger score indicates higher similarity. We will add these additional results to our Appendix.
| BLAST Score | Natural Promoter | Natural Protein |
|------------------------|-----------------|----------------|
| Absorb Escape Promoter | 19.40 | 18.88 |
| Absorb Escape Protein | 18.89 | 19.00 |
The results confirm the existence of heterogeneity in the generated and training sequences.
### Q5. Is it better to use a masked language model instead of an AR model?
We recognize this as a valuable suggestion. A masked language model could potentially improve the quality of generated sequences by considering the bidirectional context $P(x_{i:j}|x_{0:i-1},x_{j+1:L})$. However, an AR model with caches (storing intermediate results for previously generated tokens) can be more computationally efficient, as it requires at most sequence-length-many function evaluations, whereas a masked language model must re-process the whole context each time.
### Q6. How are probabilities for a sequence/sequence segment extracted from the DM?
$p^{DM}(\mathbf{x}_i)$ can be retrieved for most discrete diffusion algorithms (such as DDSM, DFM, and DiscDiff). In DiscDiff, logits are available after the second-stage decoder layer (Appendix A). In general, $p^{DM}(\mathbf{x})$ is usually stored as a variable **logits** $\in \mathbb{R}^{L\times4}$, where each row $p^{DM}(\mathbf{x}_i)$ represents the probability distribution over the \{A,T,G,C\} tokens. More details are available in our general response.
### Q7. Discrepancies between the model names mentioned in lines 222-223 and those in Table 3.
We thank the reviewer for pointing it out. This is a typo: the BitDiffusion in Table 3 should be *DNADiffusion*. We retrained *DNADiffusion* on our dataset and reported the results in Table 3.
### Q8. How A&E generalises to other datasets/settings and evaluations
This is addressed with the additional experiment and more clarification about metrics and datasets in the general response. We believe motif distribution is essential to DNA generation, as it directly measures how DNA functions.
### Q9. Tuning $T_{absorb}$ seems non-trivial.
We have found a default value of 0.85 effective in many settings. A sensitivity analysis of the algorithm on $T_{absorb}$ is provided in the general response.
---
Rebuttal Comment 1.1:
Comment: I have read the authors’ rebuttal. I hope you’ll include these clarifications and additional information in your revision.
I don’t understand how you conditioned your model on the dense CAGE profile from the DDSM paper.
I appreciate the difficulty in evaluating generated DNA sequence quality, but motif composition is so simple that it’s really not very compelling.
I think you’re misunderstanding Enformer predictions. The length of the target gene is irrelevant; Enformer predicts many assays but the closest to gene expression from a promoter is CAGE, and CAGE will produce aligned reads at the TSS only, unaffected by gene length.
Altogether, I’ll maintain my current scores.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. I would like to further clarify the following questions:
### I’m unclear about how you conditioned your model on the dense CAGE profile from the DDSM paper.
Our evaluation was conducted using the pretrained checkpoint from the [DFM repository](https://github.com/HannesStark/dirichlet-flow-matching). The A&E algorithm is a general approach that can be applied to various autoregressive (AR) models and diffusion models (DM). In the additional experiment we presented, A&E utilized the **language model** and **distilled DFM** checkpoints as the AR and DM components. Specifically, A&E can be implemented by modifying the [PromoterModule](https://github.com/HannesStark/dirichlet-flow-matching/blob/6f360612da8f69ecf4860c075f74ed52b37ce64b/lightning_modules/promoter_module.py#L21C7-L21C21) class in the original DFM repository. No further changes were made to the underlying models or the score network.
### Regarding the Motif Distribution as the Evaluation metric
We have indeed provided additional evaluation metrics, such as FID-based metrics, for human promoters. Although we have some reservations about the benchmark provided in the DDSM, we agree with the reviewers that it is still important to use it, as it better contextualizes our proposed approach within the existing literature. For other species, using motif distribution is a straightforward approach to assess the quality of generated sequences. For example, if a generative algorithm cannot correctly place a TATA-box in the appropriate position, it is generally unlikely to produce a valid promoter sequence.
PS: I wonder if the reviewer's comment might have been placed in the wrong thread.
---
Rebuttal Comment 1.2:
Comment: Thank you for the detailed response! Most of my questions have been answered but I keep my original score as I am still unconvinced by the main evaluation task in the paper although the additional evaluations are useful. I have a few suggestions:
1. Regarding Q3: I am still unconvinced of the utility of generating new promoter sequences as presented in the paper. I cannot think of a promoter generation scenario that would require us to only condition on the species (or even cell type) without conditioning on promoter strength. I understand that each species can have a different distribution of promoters but each promoter in the distribution has a different strength (i.e. how much expression it can drive). In most generation scenarios, we want a promoter that drives a certain level of expression and not just a random promoter from the distribution of promoters for that species. Therefore, I encourage the authors to consider incorporating expression strength into their conditioning. Alternatively, using an evaluation task similar to DNADiffusion where they only model cell-type-specific regulatory elements could be more meaningful since any random sequence from the distribution being modeled could be potentially useful.
2. Regarding Q5: Given the length of sequences being modeled, I doubt that the efficiency improvements would be very significant when using a masked language model vs. an AR. The improvement in probability estimation might be worth the slight increase in runtime.
---
Reply to Comment 1.2.1:
Title: Further Clarification about promoter generation task
Comment: We sincerely appreciate your suggestions, which are indeed very helpful. However, we would like to offer some additional clarification regarding the purpose of the promoter generation task.
### Further Clarification about promoter generation task
We agree that incorporating promoter strength into the conditioning is a logical extension of our work and something we plan to explore in future iterations. However, we believe that the current approach provides a necessary first step in mapping the regulatory landscape across species, which could lead to new insights that are not apparent when focusing solely on promoter strength.
In contexts such as synthetic biology or evolutionary studies, understanding the full range of available promoters within a species can provide valuable insights into the species’ inherent regulatory potential, independent of expression strength. This foundational knowledge can then inform subsequent research where specific promoter strengths are prioritized. For example, a recent study published in Science [1] demonstrated that human promoter sequences might follow relatively simple syntax. Such studies require a thorough examination of native promoter sequences, and generative models can offer significant insights in these scenarios.
Additionally, from a benchmarking perspective, if a generative algorithm struggles with species-wise generation, it would likely face challenges in other settings where only class information is used for conditioning. Conversely, strong performance in species-specific class conditioning suggests the algorithm may also perform well when conditioning on other factors, such as expression-strength classes.
[1] Dudnyk, K., Cai, D., Shi, C., Xu, J. and Zhou, J., 2024. Sequence basis of transcription initiation in the human genome. Science, 384(6694), p.eadj0116. | Rebuttal 1:
Rebuttal: We appreciate the valuable feedback from all reviewers. Overall, the reviewers agree that our proposed algorithm, A&E, is well-motivated and clearly illustrates the limitations of AutoRegressive (AR) models and Diffusion Models (DMs).
Most reviewers (crFN, m7HE, hZwW) recognize the significant real-world impact of the task of DNA generation. Both reviewers m7HE and crFN believe that the proposed method, A&E, is a plausible solution to bridge the limitations of existing AR models and DMs. Additionally, reviewer crFN suggests that A&E “could lead to more research beyond DNA generation.” While there are some concerns about the evaluation presented in the paper, the additional results below show that A&E achieves state-of-the-art performance on both existing DDSM datasets and the more challenging EPD datasets.
### Benchmarking A\&E against DFM and DDSM
Reviewers suggest benchmarking **A&E** against DDSM[1] and DFM[2] on the *transcription profile conditioned promoter sequence design task* used in the DDSM paper. We ran **A&E** on this task with a default threshold $T_{absorb}=0.85$. The same evaluation procedure as in the [DFM repository](https://github.com/HannesStark/dirichlet-flow-matching) was used. The A&E model comprises AR and DM components, specifically the **language model** and **distilled DFM** checkpoints provided in the DFM repository. As shown in the table below (this is also Table 6 in the uploaded PDF file), **A&E** achieved a state-of-the-art result of **0.0262** on the test split. We are happy to include the results of A&E with different combinations of DM and AR components in the final draft of the paper.
|Method| MSE $\downarrow$|
|-------------------------------------|-------|
|Bit Diffusion (bit-encoding)| 0.0414 |
|Bit Diffusion (one-hot encoding)| 0.0395 |
|D3PM-uniform | 0.0375 |
|DDSM| 0.0334 |
|Language Model| 0.0333 |
|Linear FM | 0.0281 |
|Dirichlet FM| 0.0269 |
|Dirichlet FM distilled| 0.0278 |
|**A\&E** (Language Model+Dirichlet FM distilled) |**0.0262**|
### Sensitivity analysis of $T_{Absorb}$
In response to reviewers Y7is and crFN's request, we include sensitivity analysis results to show the **influence of the hyperparameter $T_{\text{Absorb}}$** of the A&E algorithm in Figure 23 (uploaded as a one-page PDF).
As $T_{\text{Absorb}}$ increases, the motif plot correlation between generated DNA and natural DNA first increases and then flattens. This is because a larger $T_{\text{Absorb}}$ encourages more frequent corrections made by the AR model, which generally improves the quality of generated sequences. However, a smaller value of $T_{\text{Absorb}}$ is more computationally efficient as it requires fewer function evaluations of the AR model. In practice, we found that a value of 0.85 is generally appropriate for different tasks and scenarios, and will add this sensitivity analysis to the appendix of our paper.
### Clarification about $p^{DM}(\mathbf{x}_i)$ from Diffusion Model
$p^{DM}$ at line 3 of Algorithm 2 refers to $p^{DM}(\mathbf{x}) \in \mathbb{R}^{L\times4}$, the token-level emission probabilities from the Diffusion Model. To clarify: while these probabilities are difficult to obtain for continuous DMs such as DDPM, $p^{DM}(\mathbf{x})$ can be retrieved from most discrete diffusion algorithms (such as DDSM, DFM, and DiscDiff). For DNA generation, $p^{DM}(\mathbf{x})$ is usually stored as a variable **logits** $\in \mathbb{R}^{L\times4}$, where each row $p^{DM}(\mathbf{x}_i)$ is a probability distribution over the \{A,T,G,C\} tokens. A concrete example is DFM’s official implementation.
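As a concrete sketch of the above: given the stored **logits** $\in \mathbb{R}^{L\times4}$, the per-token distribution over \{A,T,G,C\} is recovered with a row-wise softmax. Illustrative code only (the function name is ours, and this is not taken from the DFM implementation):

```python
import numpy as np

def token_probs_from_logits(logits):
    # logits: (L, 4) array over the alphabet {A, T, G, C}.
    # Subtract the row max for numerical stability, then softmax each row.
    logits = np.asarray(logits, dtype=float)
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)  # each row sums to 1
```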
### EPD dataset construction process
**Issues with the DDSM dataset** While the DDSM dataset [1] has brought attention to the task of DNA generation, we want to caution against using it as a general benchmark. This is because, in the DDSM task formulation, the condition *CAGE expression value* is an array of real-valued signals with the same length as the sequence to be generated. This differs from most existing generation tasks, where the condition is typically a class. In fact, rather than a generation task, it is more akin to a machine translation task, borrowing a metaphor from NLP. This motivated us to develop a more challenging generation task using the EPD dataset.
**EPD Dataset** Figure 22 in the uploaded PDF illustrates the construction process of the EPD dataset. Promoter-downstream sequence pairs are curated from EPD. While the EPD database contains 30 million samples, we aggregated the sequences to avoid repetition. This differs from the DDSM dataset, where the same promoter could appear in different instances. Two datasets were created: 1) EPD (256): a 256bp window centered on a gene’s transcription start site (TSS), split into an upstream promoter and a downstream segment. 2) EPD (2048): a 2048bp window centered on the TSS, covering broader genetic regions.
**Metrics: Motif Distribution** Motif plots have been widely used in prior DNA generation works ([3,4,5]). In this work, the motif plots are obtained using the [EPDnew analysis tools](https://epd.expasy.org/ssa/oprof.php?series=epdnew&species=hg38). To improve upon prior work, we compute the MSE and the correlation between two motif frequency distributions $y_1, y_2 \in \mathbb{R}^L$, using the following formulas, to quantitatively evaluate the differences between two motif plots. The MSE measures the distance between motif distributions, while the correlation measures the change in the shape of the motif distributions.
$\text{MSE} = \frac{1}{L} \sum_{i=1}^L (y_{1}^i - y_{2}^i)^2$
$r = \frac{\sum_{i=1}^L (y_{1}^i - \overline{y_1})(y_{2}^i - \overline{y_2})}{\sqrt{\sum_{i=1}^L (y_{1}^i - \overline{y_1})^2 \sum_{i=1}^L (y_{2}^i - \overline{y_2})^2}}$
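The two formulas above translate directly into code. A small NumPy sketch (function names are ours):

```python
import numpy as np

def motif_mse(y1, y2):
    # Mean squared error between two motif frequency distributions.
    y1, y2 = np.asarray(y1, dtype=float), np.asarray(y2, dtype=float)
    return float(np.mean((y1 - y2) ** 2))

def motif_correlation(y1, y2):
    # Pearson correlation r, identical to the formula above.
    y1, y2 = np.asarray(y1, dtype=float), np.asarray(y2, dtype=float)
    return float(np.corrcoef(y1, y2)[0, 1])
```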
[1] Avdeyev, P., et al. DDSM.
[2] Stark, H., et al. DFM.
[3] Wang, Y., et al. Promoter design.
[4] Taskiran, I.I., et al. Synthetic enhancers.
[5] Zrimec, J., et al. ExpGAN.
Pdf: /pdf/b52a96e9bde690609f53512d17b7dbbd0697eeff.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers | Accept (poster) | Summary: This paper discovers that many existing neural network classifiers share a strong correlation between input margin (distance to decision boundary) and output margin (difference between top-2 logits). The paper further proposes to use this property for local robustness estimation.
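For illustration, the output margin mentioned in the summary (the gap between the top-two logits) is cheap to compute; a minimal sketch (the function name is ours):

```python
import numpy as np

def output_margin(logits):
    # Difference between the largest and second-largest logits.
    # A small margin suggests the prediction is close to flipping.
    top2 = np.sort(np.asarray(logits, dtype=float))[-2:]
    return float(top2[1] - top2[0])
```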
Strengths: Efficient local robustness estimation is an important problem, and the proposed method offers a viable direction for this purpose. The strong correlation between input and output margins is an interesting discovery. Furthermore, the paper clearly formalizes and quantifies this correlation via Kendall rank correlation. The paper is well-written, typo-free, and clearly conveys the results.
Weaknesses: The weaknesses are mainly twofold.
First, the observation of the correlation, while interesting, is not strong enough in my opinion. Given that robust neural networks are known to be Lipschitz [1], the correlation between input and output margins is not very surprising. The main theoretical result (Theorem 1) is relatively straightforward, self-explanatory, and unsurprising.
Second, the authors did not consider the possibility of adaptive attacks. While the correlation between input and output margins is strong when the input itself is benign, it is unclear whether this correlation still holds when the input itself is adversarial. Is it possible to use attack methods to find inputs that break this correlation (i.e., output margin is large but input margin is small), leading to overestimation of the local robustness? Furthermore, the pseudo-margin prediction model adds even more potential vulnerabilities.
[1] Yang et al. A closer look at accuracy vs. robustness.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Equation (3) relies on an *equidistant* assumption. Is this assumption satisfied by real-world models? I found some short discussions in Appendix C, but would prefer a more formal analysis. Furthermore, Line 640 says "the values vary only in a small range." What do the "values" refer to? It also seems to me that Figures 8 and 9 demonstrate non-trivial variance?
- In addition to observing the correlation between input and output margins, the paper also mentions the distances in the feature space. How is the property regarding feature-space distance used in practice? For context, comparing Figure 5a and 5b, it seems that the input-output margin correlation is stronger than the input-feature margin correlation.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! Please find below our responses to your concerns.
**a/ Lipschitz Smoothness vs. Margin Consistency**
Thank you for raising this point. Lipschitz smoothness is an essential property of robust models, since a small Lipschitz constant $L$ guarantees that the network's output cannot vary by more than a factor $L$ of the input variation. Empirical adversarial training strategies indirectly encourage Lipschitz smoothness to varying degrees, and it is also possible to directly constrain the Lipschitz constant to obtain $1$-Lipschitz networks ([1] and references therein). However, *Lipschitz smoothness does not imply margin consistency*. Indeed:
*Let $f$ be an $L$-Lipschitz neural network i.e. $||f(x_1)-f(x_2)|| \leq L||x_1-x_2||$, $\forall x_1, x_2$. Let us consider two points $x_1$ and $x_2$ with $0<d_{in}(x_1)<d_{in}(x_2)$. Note that the $L$-Lipschitz condition implies that $d_{out}(x_i)\leq L d_{in}(x_i)$ for $i=1,2$. However, as long as $d_{out}(x_1)>0$, it is clearly possible a priori to have $d_{out}(x_2)<d_{out}(x_1)$, thus violating the margin consistency condition, while still satisfying the previous relations.*
This theoretical possibility is supported by our empirical finding of two robust (hence probably Lipschitz) models that are not margin-consistent. The question of how the two properties influence each other is out of the scope of this work and may be grounds for future work. We propose to add this clarification to the paper.
The theorem itself establishes that margin consistency (preservation in the feature space, of the order of the samples' input space distances to the decision boundary) is a necessary and sufficient condition for using the logit margin as a proxy score for the input margin.
**b/ Possibility of adaptive attacks**
Implicitly, margin consistency, like standard accuracy, measures a property of the model over test samples iid with the training data. We leverage it to detect vulnerable iid test samples susceptible to being adversarially attacked. We concede that it may indeed be possible to craft examples that are non-robust ($d_{in} \leq \epsilon$) but for which $d_{out}>\lambda$, and that would therefore bypass the proposed detector. We believe studying margin consistency on adversarial examples or designing adaptive attacks to be out of the scope of this work.
**c/ (Q1) Equidistance Assumption**
In Appendix C, by "values," we refer to the distances between the classifiers $||w_i-w_j||_q$ and by a small range, we mean that the interquartile range is small compared to the mean value of the distance. The empirical evidence shows that using the logit margin instead of the exact feature margin (minimum over margins to other classes) marginally affects the results and avoids the computational overhead of searching the minimum (See the general response section and Table 1 of the attached PDF file).
**d/ (Q2) The property regarding the feature space distance**
In Figure 5, by the feature space distance, we mean the distance $||h(x)-h(x')||$ in the feature space between the representation of a sample $x$ and the representation of its adversarial example $x'$. Although the correlation is slightly stronger than with the logit margin, it is as hard to compute as the input margin ($||x-x'||$), since both require finding the closest adversarial example in the input space.
[1] Araujo, Alexandre, et al. "A unified algebraic perspective on Lipschitz neural networks." arXiv preprint arXiv:2303.03169 (2023).
---
Rebuttal 2:
Title: Thank you for the response
Comment: Thank you for the response. While the response cleared up my confusion on Q2, I still have some remaining questions.
- (Minor) I agree that Lipschitzness and margin consistency are different concepts and it is possible to find counter-examples that satisfy one but not the other. That being said, they are still intuitively and empirically connected. The authors found two models that are "likely Lipschitz" but not margin-consistent, but also found 17 models that are both robust and margin-consistent (Figure 2). Hence, the theoretical results, while valid, are unsurprising in my opinion.
- (Major) As mentioned in multiple places in the paper, this work focuses on detecting non-robust decisions with logit margins, which often manifest as adversarial examples. If it is possible to construct adversarial examples that can break the margin consistency and deceive the logit margin-based detection, then the purpose of the proposed method is defeated.
***Hence, I kindly disagree that studying the margin consistency on adversarial examples or designing adaptive attacks is out of the scope of this work, and believe that they must be carefully analyzed***. This is especially important because existing work has shown that adaptive attacks can significantly compromise adversary detectors [1].
- (Minor) I am still confused about Figures 8 and 9. The authors mentioned that the distances between the classifiers vary by a small range. However, the data points in Figures 8 and 9 seem all over the place.
Therefore, for now, I maintain a rating of 4.
[1] Carlini, N. and Wagner, D. Adversarial examples are not easily detected: Bypassing ten detection methods.
---
Rebuttal Comment 2.1:
Title: Thank you for the feedback.
Comment: Thank you for the feedback.
(Minor 1) We believe that we have provided enough evidence that robust networks (or Lipschitz networks) are not necessarily margin-consistent. In particular, Lipschitzness does not imply margin consistency. However, we agree that the interplay between these concepts is an interesting direction for future research.
(Major) We do not claim to solve adversarial example detection. Instead, we aim to detect (clean) samples on which the network's decision is non-robust (susceptible to being attacked). When fixing the threshold on the logit margin to obtain a 95% True Positive Rate (for which the corresponding FPRs are provided in Table 1), we observe that the logit margins of adversarial examples are almost always (much) smaller than that threshold. By doing this, we are not trying to differentiate between adversarial samples and clean samples that are close to the decision boundary; in this case, we flag both as non-robust. As evidence of this, we provide the 99th percentile of the logit margin on adversarial examples together with the logit margin threshold at 95% TPR in the table below.
CIFAR10
| Model ID | Logit margin of Adversarial Examples (99 percentile) | Threshold of logit margin at 95% TPR for detection |
|---|---|---|
| AD1 | 0.01 | 1.42 |
| DS0 | 0.61 | 2.19 |
| DM0 | 1.64 | 3.01 |
| MR0 | 0.04 | 8.35 |
| MD0 | 0.06 | 11.04 |
| SE10 | 0.06 | 2.83 |
| PA20 | 0.14 | 1.69 |
| HE0 | 0.07 | 2.82 |
| TR0 | 0.07 | 4.11 |
| CL0 | 0.03 | 5.76 |
| EN0 | 0.13 | 5.06 |
| AL0 | 0.03 | 5.11 |
| AD20 | 0.03 | 1.33 |
| CU80 | 1.32 | 2.02 |
| RE802 | 0.13 | 1.87 |
| ZH0 | 0.17 | 2.87 |
| WA80 | 0.70 | 2.14 |
| AD10 | 0.01 | 1.20 |
CIFAR100
| Model ID | Logit margin of Adversarial Examples (99 percentile) | Threshold of logit margin at 95% TPR for detection |
|---|---|---|
| AD1 | 0.01 | 1.42 |
| AD21 | 0.02 | 1.44 |
| CU81 | 0.13 | 2.09 |
| CU41 | 0.04 | 1.95 |
| DS1 | 0.13 | 1.36 |
| HE1 | 0.06 | 1.81 |
| PA21 | 0.08 | 1.56 |
| RA11 | 0.08 | 1.77 |
| RE812 | 0.17 | 1.73 |
| RE11 | 0.07 | 1.40 |
| RI1 | 0.03 | 3.42 |
| WA81 | 0.09 | 1.59 |
| WU1 | 0.06 | 2.02 |
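The 95% TPR thresholds reported in the tables above can be derived from the logit margins of clean non-robust samples; a minimal sketch of this thresholding step (function names and data are ours, not the authors' code):

```python
import numpy as np

def tpr_threshold(nonrobust_margins: np.ndarray, tpr: float = 0.95) -> float:
    """Logit-margin threshold that flags a `tpr` fraction of non-robust samples.

    A sample is flagged (positive) when its logit margin falls below the
    threshold, so the threshold is the `tpr`-quantile of non-robust margins.
    """
    return float(np.quantile(nonrobust_margins, tpr))

def false_positive_rate(robust_margins: np.ndarray, thr: float) -> float:
    """Fraction of robust samples wrongly flagged as non-robust."""
    return float(np.mean(robust_margins < thr))
```

Comparing the 99th percentile of adversarial logit margins against this threshold, as in the tables, then shows whether adversarial examples would also be flagged.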
(Minor 2) Figures 8 and 9 show, on a common y-axis, the distributions of distances between classifiers for each model. Comparing these ranges across models is not meaningful. We agree that these plots alone cannot provide evidence that equidistance holds in these models. In contrast, the results in Table 1 of the rebuttal PDF are evidence that the impact of the distance between classifiers is negligible for margin consistency; the logit margin and the feature margin lead to quite similar ranks. Thank you for pointing this out; we will modify Section C of the supplementary material accordingly.
---
Rebuttal 3:
Comment: Thank you for the clarification, which cleared up my confusion on the two minor points. However, I would like to follow up on the possibility of adaptive attacks. I will raise my score if these questions can be sufficiently addressed.
> We aim at detecting (clean) samples on which the network’s decision is non-robust (susceptible to being attacked).
In this case, are the applications of the proposed method restricted to the case where 1) adversarial robustness is of interest; 2) clean examples are always provided?
Could you please provide a motivating scenario that satisfies this assumption?
> We observe that the logit margins on adversarial examples are almost always (much) smaller than that threshold.
This is where an adaptive attack is important. With a white-box adaptive attack, is it possible to find adversarial examples (which trick the model into mispredictions) with larger logit margins? Is it possible to find examples that do not necessarily lead to mispredictions, but induce large logit margins without actually being far from the decision boundaries (i.e., lead to robustness overestimation)?
Also, what types of attacks are used to generate the tables?
> We are not trying to differentiate between adversarial samples and clean samples that are close to the decision boundary. Instead, we both flag them as non-robust.
I agree that the proposed method can detect *clean* examples close to the decision boundary. However, for the above reasons, I am not fully convinced that adversarial examples can also be flagged.
---
Rebuttal Comment 3.1:
Title: Thank you once more for the reply
Comment: 1/ We believe it is useful in scenarios where you would indeed like to know the adversarial robustness of a sample (1) but cannot afford to run heavy adversarial attacks or an even more expensive formal verification method. We agree that our results only provide empirical evidence that the logit margin is a good measure of the adversarial robustness of a clean IID sample (2). Here are two applications that we have in mind:
* Given a large dataset, the logit margin can provide a reasonable estimate of the empirical robust accuracy while only performing attacks on a small subset of the dataset. The tables below show the robust accuracy estimates obtained by attacking only 500 samples and compare them with the accuracies reported on the benchmark. The estimation is made by finding the logit margin threshold at 95% TPR for non-robust detection at $\epsilon=8/255$ on the subsample, then using that threshold to predict over the whole 10k test samples.
* In a real-time deployment scenario, the use of logit margin can be particularly beneficial if you know that your model is margin-consistent. In such a case, you can determine in real-time, just from the forward pass, which samples are vulnerable to adversarial attacks without actually performing an attack. This capability could be used for monitoring or making decisions, with the disclaimer that local robustness does not indicate whether the sample is wrong or not, and that the detection is not perfect. For instance, this could be important when the uncertainty of the camera sensor is known.
CIFAR10
| Model ID | Estimated Robust Accuracy | Reported Robust Accuracy |
|---|---|---|
| MD0 | 38.55 | 36.91 |
| MR0 | 39.61 | 39.12 |
| CL0 | 41.54 | 40.08 |
| AL0 | 40.94 | 40.21 |
| TR0 | 43.19 | 42.23 |
| EN0 | 50.91 | 49.25 |
| AD10 | 51.79 | 51.06 |
| AD20 | 53.42 | 52.48 |
| ZH0 | 53.87 | 53.08 |
| HE0 | 54.32 | 54.92 |
| SE10 | 56.00 | 55.54 |
| DS0 | 57.59 | 56.14 |
| DM0 | 59.77 | 57.27 |
| RE802 | 60.82 | 60.73 |
| PA20 | 63.04 | 61.04 |
| WA80 | 68.29 | 67.31 |
| CU80 | 69.01 | 67.73 |
CIFAR100
| Model ID | Estimated Robust Accuracy | Reported Robust Accuracy |
|---|---|---|
| RI1 | 20.59 | 18.95 |
| AD1 | 27.51 | 27.14 |
| AD21 | 27.89 | 27.67 |
| HE1 | 29.56 | 28.42 |
| RE11 | 29.11 | 28.50 |
| WU1 | 29.38 | 28.86 |
| RA11 | 29.64 | 28.88 |
| PA21 | 31.69 | 31.08 |
| CU41 | 31.61 | 31.65 |
| RE812 | 33.11 | 32.06 |
| DS1 | 33.44 | 32.19 |
| WA81 | 38.96 | 38.83 |
| CU81 | 40.32 | 39.18 |
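The subsampling estimate described in the first bullet above can be sketched as follows (a simplified illustration with hypothetical inputs; the function name and arguments are ours, not the authors' code):

```python
import numpy as np

def estimate_robust_accuracy(sub_input_margins, sub_logit_margins,
                             all_logit_margins, eps=8 / 255, tpr=0.95):
    """Estimate robust accuracy from a small attacked subsample.

    1. On the subsample, label samples non-robust when input margin <= eps
       (input margins come from an expensive attack, e.g. FAB).
    2. Pick the logit-margin threshold flagging `tpr` of those samples.
    3. Predict robustness on the full test set from logit margins alone.
    """
    nonrobust = sub_logit_margins[sub_input_margins <= eps]
    thr = np.quantile(nonrobust, tpr)  # 95% TPR detection threshold
    return float(np.mean(all_logit_margins > thr))
```

Only the subsample ever needs to be attacked; the full-set pass is a single forward evaluation per sample.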
2/ In the tables, we used the adversarial examples generated by the FAB attack. We acknowledge without reservation that adaptive attacks or other types of attacks may produce adversarial examples with larger logit margins while still being close to the decision boundary. We apologize that we cannot provide more results on this matter: while we have given it some thought, it is not clear to us at this point how to generate such adversarial examples.
---
Rebuttal 4:
Comment: Thank you for the response.
**Regarding the two applications:**
> Given a large dataset, the logit margin can provide a reasonable estimate of the empirical robust accuracy by only performing attacks on a small subset of the dataset.
***This makes sense. Hence, I have increased my rating to 5.*** Please add this result to the paper.
> In a real-time deployment scenario, you can determine in real-time, just from the forward pass, which samples are vulnerable to adversarial attacks without actually performing an attack.
In a deployment scenario, attacks would be performed by some attacker outside the control of the deployer. Hence, the deployer may only receive adversarial examples and not have access to clean examples. Since the proposed method is only proven to work on clean examples, I am not convinced that it would be effective in this scenario.
**Regarding FAB attack:**
Is it the case that FAB attack finds minimally perturbed adversarial examples, and stops as soon as it finds one? If this is the case, then it makes sense why the output margins of adversarial examples are tiny. Does a similar phenomenon still hold for other attack algorithms that do not terminate early, such as untargeted and targeted PGD?
**Regarding adaptive attack:**
> While we have given it some thought, it is not clear to us at this point, how to generate such adversarial examples.
As mentioned above, PGD is an option. You can modify the attack objective, so that increasing output margin becomes part of the goal.
You may also try AutoAttack. However, since the original AutoAttack also stops as soon as it finds an adversarial example, some modifications are needed to increase output margin (see *Minimum-Margin AutoAttack* in Appendix B.1 of [1] for an example).
Regarding generating examples that lead to margin overestimation (but do not necessarily change the predicted class), one possibility is to run a PGD attack with an objective that maximizes output margin while restricting the distance from the nominal point (thereby restricting the input margin).
[1] Bai et al. MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers.
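The margin-maximizing PGD probe suggested above can be sketched on a toy linear classifier (the linear model, step sizes, and all names here are illustrative, not from the paper or [1]):

```python
import numpy as np

def margin_pgd(x0, W, b, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD variant that *maximizes* the logit margin (top-1 minus top-2 logit)
    inside an l-infinity ball, probing for margin overestimation.

    Toy linear model: logits = W @ x + b, so the gradient of the margin
    z_i - z_j with respect to x is simply W[i] - W[j].  (Clipping to the
    valid input range, e.g. [0, 1] for images, is omitted here.)
    """
    x = x0.copy()
    for _ in range(steps):
        z = W @ x + b
        order = np.argsort(z)
        i, j = order[-1], order[-2]            # top-1 and runner-up classes
        grad = W[i] - W[j]                     # d(z_i - z_j)/dx
        x = np.clip(x + alpha * np.sign(grad), x0 - eps, x0 + eps)
    return x
```

For a deep network the same loop applies with `grad` obtained by backpropagating the top-2 logit difference; the prediction is unchanged, but the logit margin grows while the input stays within the epsilon ball.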
---
Rebuttal Comment 4.1:
Comment: Thank you for your response.
* Regarding the deployment scenario, we agree that the model may receive all kinds of examples, including adversarial ones. We confirm below, on one of the strongly margin-consistent models, that adversarial examples crafted with standard attacks are indeed flagged as non-robust (in the same way as non-robust clean examples), even if we cannot tell whether they are adversarial or clean.
* The standard AutoAttack sequentially runs the untargeted APGD-CE, the targeted APGD-DLR, the targeted FAB, and the Square attack. For evaluation, it stops as soon as it finds an adversarial example below the $\epsilon$ threshold, so the later attacks may be run on only some samples, or not at all. In order to obtain adversarial examples for each of these attacks, we ran the evaluations one by one and collected the adversarial examples produced. We also ran the Carlini-Wagner attack (CW, cleverhans PyTorch implementation) and the tentative adaptive attack using PGD that maximizes the logit margin (PGD-MC). The table reports the 99th percentile of the adversarial logit margins and shows that, with the exception of PGD-MC, they are all below the logit margin threshold found for 95% TPR non-robust detection (1.20, from the previously provided table).
Model: AD10, $\epsilon=8/255$, 95TPR logit margin threshold=1.20
| | APGD-CE | APGD-DLR | FAB | SQUARE | CW | PGD-MC |
|---|---|---|---|---|---|---|
| adversarial logit margin (99 percentile) | 0.928 | 0.779 | 0.004 | 0.204 | 0.004 | 1.230 |
Even if the tentative adaptive PGD-MC finds larger logit margins than the other attacks, its 99th percentile is still right at the threshold (95th percentile = 1.07). Although this by no means rules out the possibility of creating more effective adaptive attacks, it suggests that the task may not be as straightforward as initially perceived.
* To find the minimally distorted adversarial for all samples, we used a FAB attack with a sufficient budget [1] and unbounded threshold, which does not stop the search as in the original AutoAttack.
[1]Xu, Yuancheng, et al. "Exploring and exploiting decision boundary dynamics for adversarial robustness." arXiv preprint arXiv:2302.03015 (2023). | Summary: The authors propose using the distance between the two max values of a model's output as a proxy for the input margin to efficiently identify samples vulnerable to adversarial attacks. This proposed margin consistency is formally defined and shown to work across many robust models on the CIFAR10 and CIFAR100 datasets. Additionally, a basic learned fix is proposed for the few models which did not have margin consistency.
Strengths: (S1) Proposed margin consistency for vulnerable sample detection in the context of robust models appears novel. Using logit statistics to detect adversarial examples is not new [1,3,4], and attacks have incorporated the difference between the two max logits [2]. However, this specific formulation (1), being on the defending side (2), and evaluating robust models (3) appear to be a novel combination.
(S2) Experiments are included for a wide variety of models, and a variety of metrics are provided for balanced evaluation. Additionally, the basic modification proposed for weakly correlated models significantly improves results.
(S3) Writing is clear, and mathematical formulation appears sound.
[1] Wang, Yaopeng, et al. "Model-agnostic adversarial example detection through logit distribution learning." 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021.
[2] Weng, Juanjuan, et al. "Logit margin matters: Improving transferable targeted adversarial attack by logit calibration." IEEE Transactions on Information Forensics and Security 18 (2023): 3561-3574.
[3] Aigrain, Jonathan, and Marcin Detyniecki. "Detecting adversarial examples and other misclassifications in neural networks by introspection." arXiv preprint arXiv:1905.09186 (2019).
[4] Ozbulak, Utku, Arnout Van Messem, and Wesley De Neve. "Not all adversarial examples require a complex defense: Identifying over-optimized adversarial examples with IQR-based logit thresholding." 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019.
Weaknesses: (W1) Additional experiments. Including experiments for one or two non-adversarially-robust models would help show the limitations (or additional potential) of margin consistency. Additionally, evaluating a robust model trained on ImageNet would help show the scalability of the method. Pretrained robust ImageNet models are available [5], so if initial compute is not a factor, computation should not be a hindrance.
(W2) No additional analysis for why margin consistency fails for the 2 CIFAR10 models.
[5] https://github.com/RobustBench/robustbench
Technical Quality: 4
Clarity: 3
Questions for Authors: (Q1) What is the computational cost to build the dataset required for training in section 3.3? And in general, what is the computational cost required for the proposed method? Not as much of an issue due to adversarially robust models already requiring more compute to train and the proposed method taking minimal compute at inference.
(Q2) How successful would an adversarial attack maximizing the difference of the 2 max logits be in bypassing the proposed detection? Experiments for this may be slightly outside the scope of the paper.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! Please see our comments in the general response about standardly trained models with zero adversarial accuracy and results on ImageNet. Below are our responses to your other concerns.
**a/ (W2) No additional analysis for why margin consistency fails for the 2 CIFAR10 models.**
We agree that further analysis of these two models may be helpful. However, we believe it may be closely related to a broader question on the relationship between robustness and margin consistency, which can be an avenue for future exploration. We have added an analysis in Figure 1 (PDF attached to the general response section) that gives further insights. It shows a disparity of correlations across different classes, while it is almost uniform in margin-consistent models (top row).
**b/ (Q1) Computational cost.**
Estimating input margins requires running the FAB attack with a sufficient budget [1]. The average time to estimate the input margins over 1000 samples is as follows (on a 16GB Titan XP GPU):
- CIFAR10: ResNet-18 (6min), ResNet-50 (19min), WideResNet-28-10 (35min)
- CIFAR100: ResNet-18 (58min), ResNet-50 (7h), WideResNet-28-10 (6h)
For ImageNet models, a ResNet-50 takes about 1 day 20hrs to 2 days on V100-32GB.
**c/ (Q2) How successful would an adversarial attack maximizing the difference of the two max logits be in bypassing the proposed detection?**
We believe it is possible to attack a sample to bypass that detection, but we think this is beyond the scope of this work. Nevertheless, in our case, we checked (line 178) that our non-robust samples are almost similar to the ones found by the standard AutoAttack evaluation which includes APGD-DLR that maximizes the DLR loss (Difference in Logits Ratio), which is a rescaled difference between the logit of the true class and the biggest logit among the others. This would seem to indicate that the logit margin can also detect vulnerable samples to this sort of attack.
[1] Xu, Yuancheng, et al. "Exploring and exploiting decision boundary dynamics for adversarial robustness." arXiv preprint arXiv:2302.03015 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. After reviewing the comments and concerns of other reviewers and your responses, I have decided to maintain my current rating. | Summary: It is currently difficult to determine how susceptible a given input to a model is to adversarial perturbations. The distance from the input to the model's decision boundary in the input space (input space margin) is a reasonable metric, but it is intractible to compute for many deep neural networks and not always meaningful. This paper investigates the use of logit margin as a proxy for input margin. The logit margin is defined to be the difference between the two largest logits. Furthermore, a model is said to be margin consistent if there is a monotonic relationship between input margin and logit margins. Theorem 1 justifies that, if a model is margin consistent, then logit margin can be used to detect non-robust samples. Experiments show that common deep learning models can display a high level of margin consistency.
Strengths: This work represents an interesting new perspective on detecting adversarial attacks. Rather than detecting adversarial examples themselves, the method presented here focuses on detecting when an example is at risk of being attacked. This approach may prove useful in the future design of new defense systems against adversarial attacks.
The experimental section of this work is strong. In particular, the correlation between the input margin and logit margin of robust models is striking in Figure 2 and supports the plausibility of this metric.
The authors present a compelling hypothesis for when margin consistency might hold in machine learning models, relating margin consistency to isometry. This may lead to an interesting line of future work. A thoughtful commentary on directions for future work is also provided in Section 5.
Weaknesses: While small logit margin may be associated with susceptibility to attacks, there may be other reasons we would not want to flag samples in this manner. For example, if a model exhibits differential performance on different subpopulations, certain subpopulations as a whole might be labeled as "brittle decisions." I think that an exploration and discussion of how pervasive brittle decisions are and how they are distributed in practice is warranted and would strengthen this paper.
Similarly, flagging brittle decisions may lead to a false sense of security. Adversarial examples have been shown to induce very high logit margins [1]. It is not clear to me that this method would be able to detect that an adversarially perturbed sample is close to a decision boundary. An enlightening experiment would be similar to figure 5a, except the samples have been adversarially perturbed to maximize logit margin.
[1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).
Technical Quality: 3
Clarity: 3
Questions for Authors: - How would using a different distance measure change your results (i.e. measuring distance with $l_2$ norm rather than $l_\infty$ norm)?
- How do you think the experimental results would change if you were using a higher dimensional/more difficult to classify dataset? Would you expect the correlations observed in these results to hold?
- Do you think this method and your results have any implications for non-robust classifiers?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed in section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! Please see our comments in the general response for answers to your questions (Q1 and Q2 are addressed in the third point of the general response, and Q3 is addressed in the last point). Below are our responses to your other concerns.
**a/ Bias across subpopulations**:
Local robustness bias does indeed exist in models; thank you for bringing up this interesting point. Interestingly, when margin consistency (approximately) holds, we observe that robustness discrepancies across classes in input margin can also be observed using the logit margin (see top row, Figure 1 of the PDF attached to the general response section). We also observe that margin consistency (approximately) holds for each class individually, so no bias seems present as far as margin consistency goes. For the weakly margin-consistent models (example shown in the bottom row of Figure 1, attached PDF), there are significant disparities between the correlations across classes, which is also why using the logit margin for such models would be problematic.
**b/ False sense of security and detection of adversarial examples:**
Margin consistency, like standard accuracy, measures a property of the model over test inputs sampled iid from the training data distribution. From the objective they optimize, adversarial examples are close to the decision boundary and, therefore, have very small input margins, and we also observe that they have very small logit margins when compared to clean examples. However, we agree that it could, in principle, be possible for specially crafted adversarial examples to bypass margin consistency. This could be interesting for future work. The detection of adversarial examples itself is a different task, considered a defence strategy, and [1] shows that it is an equivalent task to robust classification. We believe studying margin consistency on adversarial examples or designing adaptive attacks is out of the scope of this work.
[1] Tramer, Florian. "Detecting adversarial examples is (nearly) as hard as classifying them." International Conference on Machine Learning. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and for the new experimental results. My concerns have largely been addressed. However, I still believe that an investigation of adversarial/adaptive attacks would help the reader understand in what situations margin consistency is expected to hold. I have raised my score to a 6.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for your kind response and for raising your score. Note that we have provided some results on the logit margin on adversarial examples in our response to reviewer WVo4. | Summary: The paper addresses the problem of efficiently detecting vulnerable inputs to a robust deep classifier at test time without the need for running adversarial attacks or formal verification. They introduce the idea of margin consistency of a classifier to connect the input-space margin and feature-space margin (or logit margin). A classifier is margin consistent if there is a monotonic increasing relationship between the input margin and logit margin. They show that margin consistency is a necessary and sufficient condition in order for the logit margin to be used as a score to detect (separate) "non-robust" (vulnerable) samples from "robust" samples.
Using a number of robust models trained with various adversarial training methods, taken from RobustBench, they empirically show that a vast majority of these models exhibit high margin consistency (measured via the Kendall tau correlation). For the models with high margin consistency, they also evaluate the detection power of the logit margin in separating non-robust samples from robust ones.
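The Kendall tau correlation used to quantify margin consistency can be illustrated with a small self-contained helper (an O(n^2) sketch of ours, not the paper's code, ignoring ties):

```python
import numpy as np

def kendall_tau(a, b):
    """Kendall rank correlation between two score vectors.

    Counts concordant minus discordant pairs over all sample pairs;
    tau = 1 means the two margins induce exactly the same ranking.
    """
    n = len(a)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return s / (n * (n - 1) / 2)
```

Applied to per-sample input margins and logit margins, a tau close to 1 is what the paper calls high margin consistency; `scipy.stats.kendalltau` provides an efficient tie-aware version.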
Strengths: 1. The paper address an important and practical problem of detecting vulnerable inputs to a robust deep classifier in an efficient way using the logit margin as a proxy for the input margin (when the model satisfies approximate margin consistency).
1. The development of ideas and presentation (with figures) is mostly clear and easy to follow.
1. The idea of connecting the input margin and logit (or feature space) margin via the margin consistency, and using it for detecting vulnerable (non-robust) inputs looks novel.
1. The experiments are extensive and explore interesting questions.
Weaknesses: 1. It's not fully clear why the paper does not use the feature-space margin directly instead of the logit margin. Taking the minimum of the distance to the decision boundaries (over classes $j \neq i$, where $i$ is the predicted class) in Equation 2 should give the feature-space margin. Why then is the logit margin, which is only an approximation of this, needed, especially since the equidistance assumption (line 123) may not hold in practice?
1. The proof of Theorem 1 is not precise and needs some clarifications (details in the "Questions" section).
Technical Quality: 3
Clarity: 3
Questions for Authors: ### Proof of theorem 1
**Sufficiency**: It seems like the finite sample $S$ assumption is not needed. \
Suppose $f_\theta(x)$ is margin consistent, and $A^S_\epsilon$ is the set of non-robust samples from $S$ for a given $\epsilon > 0$ (as defined in the paper).
Let
$\epsilon_0 = \sup\\{ d_{in}(\mathbf{x}) : \mathbf{x} \in A^S_\epsilon \\}$ and $\lambda_0 = \sup\\{ d_{out}(\mathbf{x}) : \mathbf{x} \in A^S_\epsilon \\}$.
For any non-robust sample $\mathbf{x} \in A^S_\epsilon$, its logit margin satisfies $d_{out}(\mathbf{x}) \leq \lambda_0$ due to the margin consistency. Therefore, $\lambda_0$ perfectly separates the non-robust samples from the robust samples based on the logit margin.
(This was a little unclear in the paper and does not need a finite $S$).
**Necessity**: It is important to state that the proof is by contradiction. Suppose that for any robustness threshold $\epsilon$, $d_{out}$ admits a threshold $\lambda$ that perfectly separates the non-robust samples from the robust samples. Assume that $f_\theta$ is *not* margin consistent. It seems to me that a three point example shows the contradiction better, since with two points, it is always possible to separate the points with the inequality direction reversed.
Since the model is not margin consistent, there exist points $S = \\{ \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \\}$ such that: i) in the input margin $d\_{in}(\mathbf{x}_1) < d\_{in}(\mathbf{x}_2) < d\_{in}(\mathbf{x}_3)$ and ii) in the logit margin $d\_{out}(\mathbf{x}_1) < d\_{out}(\mathbf{x}_3) < d\_{out}(\mathbf{x}_2)$. Letting $\epsilon = d\_{in}(\mathbf{x}_2)$ (or slightly larger), we get the set of non-robust samples to be $A^S\_\epsilon = \\{ \mathbf{x}_1, \mathbf{x}_2 \\}$.
However, in this case, there exists no threshold $\lambda$ for the logit margin that can cleanly separate the non-robust samples $\\{ \mathbf{x}_1, \mathbf{x}_2 \\}$ from the robust samples $\\{ \mathbf{x}_3 \\}$. Since this leads to a contradiction, the model must be margin consistent.
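This impossibility can be checked numerically; the sketch below uses arbitrary margin values, chosen only to satisfy orderings i) and ii) above, and brute-forces candidate thresholds.

```python
# Hypothetical margin values for the three-point counterexample: any values
# with d_in(x1) < d_in(x2) < d_in(x3) and d_out(x1) < d_out(x3) < d_out(x2) work.
d_in  = {"x1": 0.1, "x2": 0.2, "x3": 0.3}   # input margins
d_out = {"x1": 0.5, "x2": 0.9, "x3": 0.7}   # logit margins

eps = d_in["x2"]                             # non-robust set A = {x1, x2}
non_robust = {x for x, d in d_in.items() if d <= eps}

def separates(lam):
    """True iff the rule 'd_out <= lam' flags exactly the non-robust samples."""
    return all((d_out[x] <= lam) == (x in non_robust) for x in d_in)

# No threshold works: it would need d_out(x2) = 0.9 <= lam < 0.7 = d_out(x3).
assert not any(separates(k / 100) for k in range(0, 151))
```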
### Questions and Comments
1. In Eqn 2, $DB_j$ is not defined. I suppose it is $DB_j = \\{ \mathbf{z} \in R^m : (\mathbf{w}_j - \mathbf{w}_i)^T \mathbf{z} + b_j - b_i \geq 0 \\}$.
1. Referring to lines 123 -- 124, the equidistance assumption seems to be glossed over, and it is thereafter assumed that the robust deep classifiers satisfy the property? As mentioned under "Weaknesses", why can we not use the feature space margins?
1. Line 133: What does `output scores` refer to here? Please clarify whether it is the logit margin.
1. Line 136: It may be worth mentioning here that negative values of Kendall Tau (up to $-1$) imply that the two rankings are anti-correlated or reversed.
1. In section 2.3, the perfect discriminative function $g$ should be defined for a specific robustness threshold $\epsilon \geq 0$ and is a function of the classifier $f_\theta$. So it would be clearer to show this in the notation, e.g. as $g_\epsilon(\mathbf{x} ; f_\theta)$. Also, the indicator function does not need the extra argument $\mathbf{x}$; that is, it can just be $\mathrm{1}\_{[d_{in}(\mathbf{x}) \leq \epsilon]}$.
1. In Figure 3, it should be $x_2 \in A_{\epsilon_0}$, not $x_2 \in A_{\epsilon}$. Also, on line 152, it should be $\lambda_0 = d_{out}(\mathbf{x}_0)$, to correspond to the figure.
1. The caption for Figure 4 is not so clear.
1. Lines 191, 192: the statement `... margin consistency is a property orthogonal to the robust accuracy ...` seems like a strong statement to make. It is possible that there could be a non-zero correlation (mutual information) between the two.
1. Figure 2 can be placed closer to the results section for readability.
1. Line 215: the word "boundary" is missing after "decision".
1. Based on line 220, it seems like a standard pointwise regression using MSE (and not learning to rank) is used to learn the mapping from the feature representation to a pseudo-margin. Could you also clarify why the Sigmoid activation is used at this network's output, for either regression or L2R?
1. Does the property of margin consistency and the detection of vulnerable samples extend to other norms such as $\ell_2$ norm?
1. Section 4, under OOD detection: the objective is usually to detect inputs that are from new classes not seen by the classifier during training, but it could also include detecting inputs that are from a different (shifted) marginal distribution, i.e., covariate-shifted OOD. It would be useful to cite a survey paper such as [Generalized out-of-distribution detection: A survey](https://arxiv.org/abs/2110.11334)
1. For mis-classification detection, a couple of relevant references are missing [2] and [3]. \
[2] https://papers.neurips.cc/paper_files/paper/2021/file/2cb6b10338a7fc4117a80da24b582060-Paper.pdf \
[3] https://openaccess.thecvf.com/content/CVPR2023/papers/Zhu_OpenMix_Exploring_Outlier_Samples_for_Misclassification_Detection_CVPR_2023_paper.pdf
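On the Kendall Tau point above, a toy illustration (tau-a, without tie handling; the margin values are made up) of how the correlation behaves for margin-consistent vs. fully reversed rankings:

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall tau-a: (concordant - discordant) / total pairs, no tie handling."""
    pairs = list(combinations(range(len(a)), 2))
    conc = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) > 0)
    disc = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) < 0)
    return (conc - disc) / len(pairs)

d_in  = [0.1, 0.2, 0.3, 0.4]          # input margins (toy values)
d_out = [0.5, 0.6, 0.8, 0.9]          # logit margins, same ordering

print(kendall_tau(d_in, d_out))       # 1.0: perfectly margin consistent
print(kendall_tau(d_in, d_out[::-1])) # -1.0: rankings fully reversed
```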
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, there is discussion on the limitations and scope of this work in section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! We will take the corrections into account. Please see our comments in the general response about defining the margin consistency in terms of the logit margin and results on the $\ell_2$ norm.
**a/ The proof of Theorem 1.**
Thank you for reading the proof carefully; we agree that there was some clumsiness in the sufficiency part. We suggest replacing line 625 with:
*"Let $x_0$ be an element of the finite set $A^S_\epsilon$ such that $d_{in}(x_0)=\max \{d_{in}(x): x\in A^S_\epsilon\}$ and let $\lambda = d_{out}(x_0)$."*
We agree that formulating perfect separation for finite samples only is not fundamentally necessary. However, it appears to us that this opens the way for a simpler proof which avoids dealing with the intricacy of the continuum. We propose to add a comment in this sense in the paper.
As for the necessity part, we need to indicate that the proof is by contradiction; thank you for noticing this. We agree that our notion of perfect separability is "oriented" (i.e. non-robust points are required to be the ones with small $d_{out}$) and that we use this property in the proof. We understand that "it is always possible to separate the points with the inequality direction reversed", but this happens if and only if the model is "reversed margin consistent" ($d_{in}(x)<d_{in}(y)$ iff $d_{out}(x)>d_{out}(y)$), which cannot happen for the logit margin, which is always non-negative and takes the value zero on points that lie on input-space decision boundaries (i.e. $d_{in}(x)=0$ implies $d_{out}(x)=0$). While the three-point counterexample is attractive, it seems to us that the existence of such an example does not follow from the negation of margin consistency. We therefore suggest keeping this proof as it is, while adding some comments that address the points you rightfully raised.
**b/** $DB_j$ in Equation 2 should be the boundary $DB_{ij}$, defined similarly to the paragraph above:
$$\text{DB}_{ij} = \\{z' \in \mathbb{R}^m: (w_i-w_j)^\top z'+(b_i-b_j)=0\\}$$
It is a good catch; we will correct it.
**c/** Yes, in Line 133, we indeed refer to the logit margin as a score computed from the neural network's output (logits). We will clarify that.
**d/ Lines 191, 192: the statement ... margin consistency is a property orthogonal to the robust accuracy ... seems like a strong statement to make.**
We recognize that this is not the right way to convey our message here, which is that being robust does not imply margin consistency (cf. the discussion of Lipschitz smoothness vs. margin consistency in the first point of our answer to Reviewer WVo4). There may indeed be more to explore about the connection between the two properties. We propose to remove that statement and to add the discussion of Lipschitz smoothness and margin consistency.
**e/ Pseudo-margin learning and the Sigmoid activation**
The purpose of our study on pseudo-margin learning is to provide evidence that it is possible to learn to map the features $h_\psi(x)$ to a proxy for $d_{out}$ that simulates margin consistency. Our approach is inspired by the setup used for confidence learning [1] (ConfidNet): a target confidence $y \in [0,1]$, with a Sigmoid output activation and the MSE loss. In our case, we use $y=\frac{d_{in}(x)}{\max \\{ d_{in}(z): z\in S\\}} \in [0,1]$ as a target, where $S$ is the training set. In L2R, the earliest and simplest methods are point-wise methods, which are no different from standard regression with the MSE loss [2]. This worked well for our purpose.
[1] Corbière, Charles, et al. "Addressing failure prediction by learning model confidence." Advances in Neural Information Processing Systems 32 (2019).
[2] He, Chuan, et al. "A survey on learning to rank." International Conference on Machine Learning and Cybernetics. Vol. 3. IEEE, 2008.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your careful responses and additional results. I have read the other reviews and author responses. The points raised by Reviewer WVo4 regarding the method's applicability to adversarial inputs, adaptive attacks, and Lipschitz smoothness are important.
I would encourage the authors to include some discussion on these aspects in the revised paper. Particularly, the method's potential vulnerability to carefully crafted adversarial inputs should be acknowledged as a limitation. Some of the results on adversarial inputs (from the review discussions) could also be included in the paper's appendices. The results on estimating the robust accuracy using the logit margin from a small test sample are interesting. These applications of the method could be included as motivation in the introduction (if not done already - I don't recall).
Overall, I think the paper makes a valuable contribution towards a light-weight method for detecting non-robust or vulnerable samples at test time for robust models that satisfy margin consistency. Therefore, I will maintain my current score of 7 in favor of acceptance. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their appreciation and thoughtful comments! The points raised by the reviewers are certainly very useful for clarifying and improving the presentation of our work while also bringing interesting avenues for future exploration. Below are some general comments about the margin consistency that are relevant to all reviewers. We individually respond to other reviewers' concerns right after.
1. Margin consistency is an order-preservation property not implied by robustness. More precisely, Lipschitz smoothness does not imply margin consistency (see details in the answer to reviewer WVo4). However, understanding how local robustness influences margin consistency, and vice versa, is an avenue for future exploration.
2. While equidistance may not be achieved perfectly in practice, we do not need perfect equidistance for the logit margin to be a practical approximation of the exact feature margin. Computing the exact feature margin requires taking the minimum over the (K-1) scaled logit differences, where $K$ is the number of classes. The approximation circumvents the computational overhead of this minimum search, which can take seconds rather than microseconds per inference; at scale this difference can add up to hours, so the approximation offers scalability when dealing with a large number of classes. Table 1 in the attached PDF shows side-by-side the results when using the logit margin (Lm columns) and the exact feature margin (Fm columns). There is little to no difference between the two results.
3. Results for some ImageNet models in $\ell_\infty$ and $\ell_2$-robust models in Robustbench (Table 2, attached PDF file) show that these models are also strongly margin consistent. However, we do not exclude that exceptions may exist.
Note that measuring the $\ell_\infty$ norm for an $\ell_\infty$ robust model makes more sense than measuring another norm like $\ell_2$ because an $\ell_\infty$-robust model is not necessarily $\ell_2$-robust and vice-versa. However, the measured input margins in the $\ell_2$ norm and $\ell_{\infty}$ norm for the models we investigated are highly correlated, so the results do not change if we measure $\ell_2$ instead. We provide results in the cases where $\ell_2$ norm matters (i.e., for $\ell_2$-robust models, available only for CIFAR10 in Robustbench).
4. While we believe that margin consistency is far less attractive for standard models (for which almost all decisions are non-robust), we nonetheless provide Kendall $\tau$ correlations for a variety of standard models in the table below. Models can be margin-consistent without being robust and vice-versa. It is unclear at this point how we could leverage the margin consistency for non-robust models.
| Model ID | Kendall tau | Accuracy | Architecture | Dataset |
|-----------------------|------------|----------|------------------|----------|
| STD\_M1 | 0.45 | 93.07 | ResNet18 | CIFAR10 |
| STD\_M2 | 0.69 | 92.60 | ResNet20 | CIFAR10 |
| STD\_M3 | 0.36 | 93.65 | ResNet-50 | CIFAR10 |
| STD\_M5 (Robustbench) | 0.50 | 94.78 | WideResNet-28-10 | CIFAR10 |
| STD\_M6 | 0.75 | 72.63 | ResNet-56 | CIFAR100 |
| STD\_M7 (Robustbench) | 0.70 | 76.52 | ResNet-50 | ImageNet |
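Regarding point 2 above, the trade-off between the exact feature margin and the logit-margin approximation can be sketched as follows (a toy linear classification head with random weights, not the paper's models; the logit margin is taken here as the unscaled top-two logit gap, which matches the exact margin up to a constant factor under the equidistance assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
K, m = 10, 64                          # classes, feature dimension (toy sizes)
W, b = rng.normal(size=(K, m)), rng.normal(size=K)
z = rng.normal(size=m)                 # feature vector h_psi(x) for one input

logits = W @ z + b
i = int(np.argmax(logits))             # predicted class

# Exact feature margin: minimum over the K-1 scaled distances to each pairwise
# decision boundary DB_ij (this is the minimum search the rebuttal avoids).
others = [j for j in range(K) if j != i]
dists = [(logits[i] - logits[j]) / np.linalg.norm(W[i] - W[j]) for j in others]
feature_margin = min(dists)

# Logit-margin approximation: top-two logit gap, skipping all norm computations
# (exact up to scale when ||w_i - w_j|| is constant over class pairs).
logit_margin = logits[i] - np.partition(logits, -2)[-2]

assert feature_margin >= 0 and logit_margin >= 0
```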
Pdf: /pdf/ed359ece5496be3cc6295908a21ccc4e81dcd604.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
fMRI predictors based on language models of increasing complexity recover brain left lateralization | Accept (poster) | Summary: The paper studies how well 28 large language models (LLMs) predict fMRI activity of human subjects listening to an audiobook. First, they observe a scaling law; the neural predictivity of LLMs increases linearly with the logarithm of the number of model parameters, a result consistent with prior work. Second, they show larger models predict left-hemisphere activity better than right-hemisphere activity, and this left-right asymmetry increases with model size.
Strengths: 1. The paper is written clearly and has put effort into sufficiently explaining the methodological details.
Weaknesses: From most to least significant:
1. For the paper's main contribution, i.e., asymmetry in Left-Right neural predictivity that increases with model size (Figure 6), the paper does not provide sufficient experimental/theoretical analyses into possible reasons, nor an adequate attempt into possible explanations/interpretations.
2. The growing L-R asymmetry result may suggest qualitatively interesting differences between larger vs smaller LLMs, as briefly mentioned in Lines 256-260 in the paper. However, an alternative reason could be that right-hemisphere activity is just lower in general (as shown in many prior studies), or more noisy / less consistent across subjects (especially since this paper uses a group-average of fMRI activity).
- The L-R difference does not seem qualitatively interesting, because both trends in Fig. 6a are straight lines. This seems to simply hint at left-hemisphere activity being easier to predict. If the left-hemisphere line were exponentially increasing instead, that may hint at a qualitatively interesting difference between larger vs smaller models. But instead, we see two straight lines. In fact, the L-R difference seems to already be present with models with 350M parameters, and the gap is only linearly widening as model size increases.
3. For the growing L-R asymmetry result, there is no significant effect in many key regions of the language network (IFG, temporal pole) (Figures 4 and 7) (Lines 225-227). The strongest effects seem to occur in AG, which some consider not part of the language network [1]. This is a relevant weakness since this paper focuses on the language network.
4. They compute a group average of fMRI activity across subjects before using LLMs to predict fMRI activity (Line 64), and identified regions of interest (ROIs) in the language network using anatomical locations (Line 95-98), rather than functional localization. However, these methodological choices may have issues when studying the language network (see Box 1 of [1]). Because of inter-individual variability in the precise anatomical locations and sizes/shapes of functional areas in the language network, any voxel defined in a common anatomical space often corresponds to different functional areas across individuals. Hence, using group averages and anatomical ROI localization can lead to the blurring of neighboring areas and information loss [1].
[1] E. Fedorenko, “The language network as a natural kind within the broader landscape of the human brain,” Nature Reviews Neuroscience.
Technical Quality: 2
Clarity: 3
Questions for Authors: Just a minor suggestion to fix typos:
1. Hyperlinks to appendix figures do not work correctly. E.g., "Fig. B.1" (Line 162), also: Lines 173, etc. These are the hyperlinks that start with "Fig B."
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations not mentioned in the paper:
1. See Weaknesses 3 and 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Here is a point-by-point response to your comments.
> The growing L-R asymmetry result may suggest qualitatively interesting differences between larger vs smaller LLMs, as briefly mentioned in Lines 256-260 in the paper. However, an alternative reason could be that right-hemisphere activity is just lower in general (as shown in many prior studies), or more noisy / less consistent across subjects (especially since this paper uses a group-average of fMRI activity).
Thanks for raising the point of left vs. right signal-to-noise in the data. Indeed, one intuitively expects higher brain correlations in regions where the signal-to-noise ratio is better, and would therefore predict left-right asymmetries on that basis. First, note that we compare a large variety of models on the same data, so the quality of the data is in fact a constant across models. This differs from the previous study (Caucheteux & King, 2022), which reported a left-right asymmetry using a single model. To investigate this issue further, we performed new analyses that we intend to include as supplementary figures. They demonstrate that while the average r-score (across model sizes) is related to the inter-subjects correlation (ISC), this is not the case for the slope of the relationship between brain correlation and model size.
Plots from Fig. 1 in the rebuttal pdf display the left and right, model-free, inter-subjects correlations (ISC) in various brain regions from the Harvard-Oxford atlas. Some regions, especially in the language network, indeed show asymmetries. Fig. 2a, based on Fig. B5 of the current paper, shows ROI analyses with the addition of horizontal lines indicating each region's ISC. Fig. 2b displays the relationship between the average r-score and the ISC, and Fig. 2c the relationship between the ISC and the slope of r-scores as a function of the number of parameters. Consider for instance aSTS and BA44: in the aSTS, the inter-subjects reliability is greater in the right hemisphere, and on average so is the r-score of the encoding models, but the slope of the relationship between brain correlation and model size is greater in the left aSTS; conversely, in BA44, although the inter-subjects correlation is higher in the left hemisphere, which is reflected in the position of the models' brain correlations on the y-axis, the slopes are quite similar.
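To make the distinction concrete, here is a minimal sketch of the slope comparison described above, with entirely hypothetical r-scores and model sizes; the point is that the asymmetry of interest is a difference of slopes against log model size, not a difference of average levels:

```python
import numpy as np

# Hypothetical r-scores for models of increasing size (illustrative values only).
n_params = np.array([1e8, 3e8, 1e9, 7e9, 3e10, 7e10])
r_left   = np.array([0.10, 0.12, 0.15, 0.19, 0.22, 0.24])
r_right  = np.array([0.09, 0.10, 0.12, 0.14, 0.15, 0.16])

def slope_vs_log_size(r):
    """Least-squares slope of brain correlation against log10(#parameters)."""
    return np.polyfit(np.log10(n_params), r, 1)[0]

# The growing asymmetry is a difference of slopes: the left-hemisphere fit is
# steeper, independently of which hemisphere starts higher on the y-axis.
assert slope_vs_log_size(r_left) > slope_vs_log_size(r_right) > 0
```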
> The L-R difference does not seem qualitatively interesting, because both trends in Fig. 6a are straight lines. [...] If the left-hemisphere line were exponentially increasing instead, that may hint at a qualitatively interesting difference between larger vs smaller models. But instead, we see two straight lines. In fact, the L-R difference seems to already be present with models with 350M parameters, and the gap is only linearly widening as model size increases.
It is not clear to us why only nonlinear effects would be interesting. Perhaps the idea is that steps in the relationship between the number of parameters and brain scores would reveal the sudden emergence of new capacities in the network? This is not what we observed, and in that sense, it means that we learn something from the data.
> For the growing L-R asymmetry result, there is no significant effect in many key regions of the language network (IFG, temporal pole) (Figures 4 and 7) (Lines 225-227). The strongest effects seem to occur in AG, which some consider not part of the language network [1]. This is a relevant weakness since this paper focuses on the language network.
5 out of the 7 ROIs actually show significant effects (and 2 of them are located in the IFG). Regarding the Angular Gyrus (AG), while it does not often show up in language-minus-control contrasts, and is thus not included in the core language network by some researchers, it has long been known to be sensitive to linguistic operations, for example semantic composition (Price et al. 2015, cited in our paper). This is the rationale for including it in the a priori list of ROIs (established before looking at the data!). We are not making a strong statement here about which regions belong to the language network, but we think that the inclusion of the Angular Gyrus is reasonable.
The fact that the slopes do not differ in BA44 is a weakness only if one a priori considers that it should show an asymmetry. The role of BA44 is actually fiercely debated, and some authors consider it to be involved in bilateral articulatory/phonological processes (Matchin & Hickok, 2020, Cereb. Cortex). So rather than a weakness, we take the BA44 result, if it is replicated, as an empirical fact to be explained.
> Because of inter-individual variability in the precise anatomical locations and sizes/shapes of functional areas in the language network, any voxel defined in a common anatomical space often corresponds to different functional areas across individuals. Hence, using group averages and anatomical ROI localization can lead to the blurring of neighboring areas and information loss
This remark is correct. The Little Prince fMRI dataset does not contain a language localizer that would have permitted us to identify the most sensitive voxels at the individual level. However, given the size of our ROIs and the smoothness of the data we are working with, our experience is that selecting individual voxels from a functional localizer has very little impact. Without doubt, ROI analyses, by averaging over voxels with different functional profiles, blur the results; in fact, any kind of averaging between subjects has this issue. In our opinion, the ROI analyses are still a worthy complement to the voxel-based view that we present in Fig. 4.
> Hyperlinks to appendix figures do not work correctly. E.g., "Fig. B.1" (Line 162), also: Lines 173, etc. These are the hyperlinks that start with "Fig B."
Thanks for pointing this out, it is corrected in the revised version.
---
Rebuttal Comment 1.1:
Comment: I raised my score from 4 to 5. Thanks for the clarifications and new rebuttal pdf results. The authors provided convincing clarifications for some of the weaknesses (and their sub-points) I raised.
1. The paper does not provide sufficient experimental/theoretical analyses into possible reasons for the growing L-R asymmetry, nor an adequate attempt at possible explanations/interpretations (unresolved)
2. Linearly growing L-R asymmetry may simply reflect L-R differences in SNR (partially resolved)
- In my review, I wrote that straight lines in Fig. 6a (x-axis: model size, y-axis: brain correlation) may not be interesting, even though the LH slope is steeper than the RH slope. In particular, I was thinking that this may simply reflect LH having a higher SNR / noise ceiling than RH. In this case, if you changed the y-axis to the noise-normalized brain correlation (i.e., brain correlation divided by respective noise ceilings), the LH and RH slopes would be equal. Here, the growing asymmetry simply reflects that LH activity is easier to predict, rather than an interesting difference between larger vs smaller models.
- Put another way, if you had two datasets where one had a higher noise ceiling, a "growing asymmetry" or difference in model-size-vs-brain-correlation slopes between the two datasets is not interesting if it is simply due to the difference in noise ceilings.
- Even after looking at the new rebuttal pdf results, I am unsure if the L-R difference in SNR fully or only partially explains the L-R asymmetry in slopes.
3. Some regions in the language network do not show a significant effect (resolved)
- Thanks for the clarification.
4. Anatomical rather than functional localization, and group-level analysis (mostly resolved)
- Thanks for the clarification.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback.
> In this case, if you changed the y-axis to the noise-normalized brain correlation (i.e., brain correlation divided by respective noise ceilings), the LH and RH slopes would be equal.
Maybe we misunderstand something here. If one considers the inter-subjects correlation (ISC) as a proxy for the noise ceiling, then it is a constant of the data that does not depend on the model, and in particular not on its number of parameters. In that case, dividing the brain correlations by the ISC would affect the global position on the y-axis but not the ratio between the slopes.
> I am unsure if the L-H difference in SNR fully or only partially explains the L-H asymmetry in slopes.
In our response to reviewer d74H, we report a new analysis that goes against the idea that the growing left-right asymmetry in brain correlation is explained by an asymmetry in inter-subjects correlation. We hope that it answers your concern. | Summary: The authors investigate whether larger-parameter language models better predict left versus right hemisphere brain responses (recorded during listening of a naturalistic story, via fMRI), motivated by left-lateralized processing of language in most individuals. They indeed find that larger models better predict left hemisphere brain responses, compared to right hemisphere responses. In addition, the authors have two separate findings that mainly replicate prior work: larger models better predict brain responses, and the prediction performance correlates with the language model's ability to perform natural language tasks.
Strengths: - The primary finding of the paper, that larger language models better predict left hemisphere brain responses, is novel.
- The paper is well-written and well-cited. The structure of the paper is intuitive, making it easy to follow.
- The paper has a good number of control experiments.
- The paper provides nice replication of former work in this domain.
Weaknesses: - The paper averages across different individuals' brain responses (n=48) in the template brain space. Individual activations to language differ from participant to participant, and there is no direct voxel-to-voxel correspondence (see e.g., https://pubmed.ncbi.nlm.nih.gov/32160565/). Hence, it is somewhat unclear whether the primary finding of brain-model lateralization can be explained away by methodological issues. Imagine that individuals generally have a higher voxel-to-voxel correspondence in the left hemisphere compared to the right -- would large models, given their higher expressivity, then be better able to predict those left-hemisphere voxels? (see Questions for more specific questions).
Technical Quality: 3
Clarity: 4
Questions for Authors: - Pertaining to the issue of group-level averaging, is it true that the left hemisphere generally has higher reliability? Was that controlled for in any way in the analyses? (As far as I understand, the authors analyze the top 25% most reliable voxels, yielding approximately a similar number of voxels in both hemispheres (~3K).) What are the average reliability values per hemisphere? If the authors subset left and right hemisphere voxels such that they are matched on reliability, are the primary model-brain lateralization findings still valid?
- Are the performance vs. brain correlations (Figure 5) equally strong in both hemispheres? Do the left hemisphere brain activations contain more information about task, relative to the right ones?
- Can the authors clarify what they mean by "The right hemisphere is usually “hidden” because it is inhibited by the left hemisphere in healthy people"? (line 285)?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors do discuss limitations briefly, but I am not sure I agree with the conclusions that follow. For instance, in lines 299-300 the authors mention the group-level approach, but conclude that assessing inter-individual variability would require "However, this would require a random sample of the population, not only right-handed participants". Why is that true? Inter-individual variability can still be investigated in right-handed individuals? It is true that it would be useful to assess the hemispheric dominance of individuals -- that could be done by e.g., assessing the consistency of brain responses within an individual, across runs (other approaches exist as well).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Here is a point-by-point response to your comments.
> Pertaining to the issue of group-level averaging, is it true that the left-hemisphere generally has higher reliability? Was that controlled for in any way in the analyses (as far as I understand, the authors analyze the top 25% most reliable voxels, yielding approximately a similar number of voxels in both hemispheres (~3K). What are the average reliability values per hemisphere? If the authors subset left and right hemisphere voxels such that they are matched on reliability, are the primary model-brain lateralization findings still valid?
Thanks for bringing this up. The description of the data itself deserves more attention, and the question of whether the effect is simply due to a difference in the left-right signal-to-noise ratio is an important one. We will discuss these two aspects in more detail in the camera-ready version, along with the addition of two new figures shared in the rebuttal pdf that show new analyses.
First, with respect to the question of group averaging, we show that the left hemisphere does indeed have higher reliability in general (see Fig. 1 of the accompanying pdf), although the inter-subjects reliability follows the diagonal of the left vs. right plot as a general trend. Now, importantly for the discussion, this could explain why, for a single encoding model, the correlation is better in the left than in the right hemisphere, but it could not explain the left-right difference in the scaling law as a function of the number of parameters, since the quality of the data is constant across models. In short, Fig. 2 shows that (i) the mean brain correlation follows the reliability of the signal in the different regions of interest, but (ii) the slopes do not generally follow such a trend. This point is also discussed at length in another reviewer thread.
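For reference, a minimal sketch of a leave-one-out inter-subjects correlation of the kind used as a model-free reliability measure (synthetic data: a shared stimulus-driven signal plus per-subject noise; not the paper's actual analysis code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_trs = 8, 300
shared = rng.normal(size=n_trs)                       # stimulus-driven signal
# Toy data: each subject's timecourse = shared signal + idiosyncratic noise.
data = shared + 0.8 * rng.normal(size=(n_subjects, n_trs))

def isc(data):
    """Leave-one-out inter-subjects correlation, averaged over subjects."""
    rs = []
    for s in range(len(data)):
        others = np.delete(data, s, axis=0).mean(axis=0)
        rs.append(np.corrcoef(data[s], others)[0, 1])
    return float(np.mean(rs))

print(isc(data))   # higher ISC = more reliable, stimulus-locked signal
```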
> Are the performance vs. brain correlations (Figure 5) equally strong in both hemispheres? Do the left hemisphere brain activations contain more information about task, relative to the right ones?
The relationships between performance (HellaSwag or perplexity) and r-scores, split by hemisphere, show the same increasing left-right difference as the version with the (log) number of parameters on the x-axis; that is, the better the performance, the more marked the asymmetry. These plots will be added as supplementary figures to the paper.
More generally, we think that it is important to further investigate the relationship between the performance on a given task (to name but a few: syntactic comprehension, sentiment analysis, social understanding) and the difference in brain score in a given hemisphere or region. We are working on this, but this goes well beyond the present paper.
> Can the authors clarify what they mean by "The right hemisphere is usually “hidden” because it is inhibited by the left hemisphere in healthy people"? (line 285)?
This statement refers to the concept of interhemispheric inhibition, according to which one hemisphere can prevent concurrent processing by the opposite hemisphere. It is one of the theories proposed to explain the capacity of the right hemisphere to take over linguistic functions in aphasic patients with left lesions (for example, Tzourio-Mazoyer et al., 2017, cited in our paper, wrote: “In typical brains, inter-hemispheric inhibition, exerted from the LH to the RH, permits the LH to maintain language dominance. In pathological conditions, inter and intra-hemispheric inhibition is decreased, inducing modifications on the degree of Hemispheric Specialisation and of language networks.”). We will tone down this assertion to make it clearer that it is a hypothesis.
> The authors do discuss limitations briefly, but I am not sure I agree with the conclusions that follow. For instance, in lines 299-300 the authors mention the group-level approach, but conclude that assessing inter-individual variability would require "However, this would require a random sample of the population, not only right-handed participants". Why is that true? Inter-individual variability can still be investigated in right-handed individuals? It is true that it would be useful to assess the hemispheric dominance of individuals -- that could be done by e.g., assessing the consistency of brain responses within an individual, across runs (other approaches exist as well).
It is true that interindividual variability could be investigated only in right-handed participants. What we had in mind is that if one wanted to assess whether the LLMs correlate more strongly with the dominant hemisphere of humans in general, it would make sense to generalize to the whole population, rather than a sub-population. In any case, because this issue is not central to the paper, we will remove the sentence. A more relevant focus for this section is “what could we learn from individual analyses?”, and we will expand on that.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for addressing my comments, in particular, for running the analyses in Figure 2. I think the claims need to be rephrased slightly by taking those analyses into account. I am still struggling to make sense of Figure 2B. Usually, larger models also perform better on the tasks (which also correlates with brain scores, as you demonstrate). Then why would it not be the case that the number-of-parameters trend holds up? Any speculations are welcome. Thanks again!
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback. We will include and discuss these new analyses.
> Then why would it not be the case that the number of parameter trend holds up? Any speculations are welcome.
We understand the concern. The relationship between inter-subject correlation (ISC) and the performance of encoding models is indeed complex and subtle.
We produced a new plot that might answer your concern (which we unfortunately cannot share here, but will include in supplementary figures). This plot shows, for each (pairwise symmetric) parcel of the Harvard-Oxford atlas, the left-right difference in inter-subject correlation on the x-axis, and, on the y-axis, the slope of the left-right difference in brain score as a function of model size (this corresponds to the slope of the line displayed in Fig. 6b, computed over the whole brain in that figure, but now computed for each parcel). The verdict is that there is no relationship between the two quantities (r = -0.1; p = 0.52). | Summary: Recent studies on language processing in the human brain using fMRI data and language model embeddings have shown that both hemispheres are involved in language processing, although many previous studies have indicated a left lateralization. The authors aim to reconcile these findings. They use embeddings from language models of different sizes to predict naturalistic fMRI data and show that as the number of parameters or performance of the models increases, the respective embeddings are more predictive of activity in the left hemisphere vs. in the right one. They also show that this pattern holds in various regions of interest.
Strengths: - Originality: The main novel contribution of this work is showing the clear dependence between model capacity or performance and predicted left lateralization of language processing. Many previous studies, including some that have been cited, demonstrate left lateralization to a certain extent (ex: Fig 3 from Caucheteux and King (2022)) and have shown that predictivity scales with language model capacity (ex: Fig 2 from Schrimpf et al. (2021)). However, I appreciate the thoroughness and clarity of the authors’ analyses and believe that it provides additional evidence to back a key finding in neuroscience.
- Quality: Experiments performed to support claims are very thorough and well-motivated.
- Clarity: The paper is well-written and easy to understand. One should be able to reproduce the results presented using the paper and associated code.
- Significance: As mentioned above, although previous studies have shown left lateralization to a certain extent and that predictivity scales with language model capacity, I believe that this work is still important since it clearly shows that more left-lateralization emerges as the models are scaled up.
Weaknesses: This is a strong submission in general. I have the following suggestions/questions to improve it:
1. The second paragraph of the introduction (and the abstract) mentions that many recent works have discovered strikingly symmetric brain maps for language processing. To the best of my knowledge, many of these studies conclude that both hemispheres are involved in language processing without assessing left-lateralization, since other studies had contested the role of the right hemisphere altogether. If one looks at their figures (ex: Fig 2 from Toneva and Wehbe (2019) or Fig 3 from Caucheteux and King (2022)), the left hemisphere is generally predicted better than the right hemisphere. Therefore, I think that this paragraph can be modified to better reflect these results.
2. I understand the need for reducing the computational burden of the study and the nuances mentioned in the limitations section. However, for a subset of the participants, it would be great to see individual participant-level results to make an even more convincing argument. This is mostly to rule out any artifacts associated with using an average subject and not to make inferences about inter-individual variability.
3. For encoding any given word, are the embeddings computed by only inputting the previous words or using the full sentence/text? In other words, are the embeddings capturing future information due to the full sentence/text being encoded at once? If they are capturing future information, that is at odds with how the data was collected since words were sequentially presented to participants.
Technical Quality: 3
Clarity: 3
Questions for Authors: I have mentioned my questions and suggestions in the Weaknesses section above.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have identified the limitations of their analyses and I do not foresee any negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Here is a point-by-point response to your comments.
> The second paragraph of the introduction (and the abstract) mentions that many recent works have discovered strikingly symmetric brain maps for language processing. To the best of my knowledge, many of these studies conclude that both hemispheres are involved in language processing without assessing left-lateralization, since other studies had contested the role of the right hemisphere altogether. If one looks at their figures (ex: Fig 2 from Toneva and Wehbe (2019) or Fig 3 from Caucheteux and King (2022)), the left hemisphere is generally predicted better than the right hemisphere. Therefore, I think that this paragraph can be modified to better reflect these results.
Thanks for pointing that out. We apologize for missing these passages and will modify our introduction accordingly.
> I understand the need for reducing the computational burden of the study and the nuances mentioned in the limitations section. However, for a subset of the participants, it would be great to see individual participant-level results to make an even more convincing argument. This is mostly to rule out any artifacts associated with using an average subject and not to make inferences about inter-individual variability.
We agree this is an important point, but we will not be able to conduct this work within the rebuttal time frame. We have modified our paragraph in the Limitations section to better acknowledge the interest of performing the same analyses at the individual level, which we will do in the future.
> For encoding any given word, are the embeddings computed by only inputting the previous words or using the full sentence/text? In other words, are the embeddings capturing future information due to the full sentence/text being encoded at once? If they are capturing future information, that is at odds with how the data was collected since words were sequentially presented to participants.
Indeed, the embeddings are computed autoregressively, taking into account only the past of a given word, and not future information.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! My concerns have been addressed but it would be great if the authors could include the individual level analyses in the final version. I keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback.
> it would be great if the authors could include the individual level analyses in the final version
The full pipeline takes about five days. Before the deadline, we will try to pick a few subjects, and possibly a subset of models, to check the validity of the results at the individual level. | Summary: The paper studies whether encoding fits with LLMs onto fMRI data can be used to find left lateralization, the idea that language is lateralized to the left hemisphere. This is a well-known property of language localization in many humans. They also show that increasing model size improves fit from their encoding models.
Strengths: * A new fMRI dataset that can serve as additional analysis for LLMs.
* Incorporation of a large family of LLMs never studied before in previous work including LLaMA-2, Mamba, etc.
* Likely the first (to my knowledge) to directly test the correspondence between LLMs and lateralization.
Weaknesses: * Although the motivation of this paper is reasonable, I think the presentation can be significantly improved.
* The paper should make an argument about why lateralization is an important property of brain fits with LLMs. I think this was missing.
* A small nitpick: From my understanding, I can’t see any data from right hemisphere regions in Schrimpf et al. (2021) [1]. The paper is very careful in acknowledging left-lateralization (see Page 2 of the Schrimpf paper).
* The introduction should present some more details on the setting. What language models are used? What data? This is covered later but is missing from an introduction that should introduce more of the paper. Also the introduction doesn’t even mention the other contribution of the paper on model size correlating with neural fit.
* The novelty and takeaways of this paper are unclear to me.
* First, the paper claims that larger language models have better fits. Schrimpf et al. (2021) [1] already establishes that. This is acknowledged in the conclusion/discussion section. The authors also refer to Antonello et al. (2024), which seems to establish this as well. How would the authors characterize the difference in the takeaway of their paper vs the prior takeaway? I’m not particularly satisfied with the argument that much larger models were used or that a logarithmic scaling law was identified.
* Caucheteux and King [2] discuss lateralization at length in their paper. Refer to page 3 where they show that language embeddings have significantly better left-lateralization over right-lateralization (R±0.01, p < 10^-14). This may not use the controls presented in this paper but still points to the same result.
* I think both of these need to be discussed at more length. The contribution of this paper should be better characterized. I don’t see the novelty of this work even in regards to lateralization.
* Baselines
* The paper doesn’t consider the baseline comparison with randomly initialized models. Why? I think this is a very important baseline for characterizing architectural bias and this is done in previous work.
[1] Schrimpf et al. The neural architecture of language: Integrative modeling converges on predictive processing. PNAS, 2021.
[2] Caucheteux et al. Brains and algorithms partially converge in natural language processing. Nature Communications, 2022.
Technical Quality: 3
Clarity: 1
Questions for Authors: * Why did you also include Mamba? I’m not sure it adds anything or detracts anything but I’m just curious about the choice.
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 1
Limitations: I believe all reasonable limitations were addressed with regards to this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Here is a point-by-point response to your comments.
> The paper should make an argument about why lateralization is an important property of brain fits with LLMs. I think this was missing.
Studies correlating word embeddings or LLM activations with fMRI data have produced strikingly bilateral results. One potential explanation is that brain scores are essentially driven by semantic representations, which are supposedly represented in a very distributed fashion across both hemispheres. Our work shows that the symmetrical results are due to the use of models with small numbers of parameters and that, as the number of parameters increases, the left-right asymmetry becomes larger and larger. This non-obvious empirical observation opens several questions for further investigation, notably what aspects of large LLMs, relative to small ones, make them more asymmetrical.
> A small nitpick: From my understanding, I can’t see any data from right hemisphere regions in Schrimpf et. al. (2021) [1]. The paper is very careful in acknowledging left-lateralization (see Page 2 of the Schrimpf paper).
Schrimpf et al (2021) show results from both hemispheres in Fig. S3 of their supplementary materials. Responses on both sides are remarkably similar and the caption states that "The distribution of predictivity values across the language-responsive voxels [...] are similar across regions, and between the LH and RH components of the network". The authors do not report any quantitative comparison between hemispheres. While they acknowledge the well-known fact that language is left lateralized, surprisingly, they did not discuss the lack of lateralization in their results. Our work replicates their findings with small models but shows that it breaks down with larger models.
> The introduction should present some more details on the setting. What language models are used? What data? This is covered later but is missing from an introduction that should introduce more of the paper.
The introduction only contained sparse information about the exact list of language models that are used, but all such information is provided in great detail both in the Methods section (which follows the Introduction) and the Appendices. As the final camera-ready version of the paper allows for one extra page, we will be able to present more details as suggested.
> First, the paper claims that larger language models have better fits. Schrimpf et. al. (2021) [1] already establishes that. This is acknowledged in the conclusion/discussion section. The authors also refer to Antonello et. al. 2024 which seems to establish this as well. How would the authors characterize the difference in the takeaway of their paper vs the prior takeaway?
Our discussion simply mentions that we replicate these previous findings and devotes more space to the growing left-right asymmetry, which is a genuine novelty.
> Caucheteux and King [2] discuss lateralization at length in their paper. Refer to page 3 where they show that language embeddings have significantly better left-lateralization over right-lateralization (R±0.01, p < 10^-14). This may not use the controls presented in this paper but still points to the same result.
Thank you for pointing to the left-right test by Caucheteux & King (2022), which we had missed. Note that although they test it, they neither discuss nor comment on it in their paper.
Nevertheless, we believe that our paper goes well beyond showing a left-right difference in a single model. Given one model, if the data has a higher signal-to-noise ratio in the LH, one expects an encoding model to better fit the LH compared to the RH. This is in fact the case in Caucheteux & King (2022; Fig. 2d).
It is a legitimate concern that if the signal-to-noise ratio is higher in the left hemisphere, then the brain correlations, as predicted by the encoding models, should also be higher in general. But, given that the data is constant across models, it is unclear why this would affect the slope relating brain correlations to the number of parameters. We did some extra analyses (see pdf) showing that, although the mean brain correlation depends on the signal-to-noise ratio, the slope cannot be explained by this factor.
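To make the slope analysis discussed above concrete, here is a hedged numerical sketch with synthetic per-parcel values (the parcel count, the score distributions, and the use of `np.polyfit` are our stand-ins for illustration, not the authors' actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_parcels = 48                                    # illustrative parcel count
log_params = np.log10([124e6, 355e6, 774e6, 1.5e9, 7e9])  # model sizes (log scale)

# Synthetic stand-ins for the two per-parcel quantities being compared:
# isc_diff:   left-right difference in inter-subject correlation (signal quality)
# score_diff: left-right difference in brain score, one value per model size
isc_diff = rng.normal(0.0, 0.02, n_parcels)
score_diff = rng.normal(0.0, 0.01, (n_parcels, log_params.size))

# Slope of the left-right brain-score difference vs. log model size, per parcel.
slopes = np.array([np.polyfit(log_params, score_diff[p], 1)[0]
                   for p in range(n_parcels)])

# Pearson correlation between signal-quality asymmetry and the scaling slope.
r = np.corrcoef(isc_diff, slopes)[0, 1]
```

A near-zero correlation across parcels would indicate that the scaling slope is not driven by signal-to-noise asymmetry.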
> The paper doesn’t consider the baseline comparison with randomly initialized models. Why? I think this is a very important baseline for characterizing architectural bias and this is done in previous work
Fig. 2a of the submission presents the distributions of r-scores from various models, including distributions from random (fixed) embedding models. Pasquiou et al. (2022, ICML) have shown that this type of model yields brain correlations as strong as or stronger than those of untrained language models. Nevertheless, following the reviewer’s comment, we computed the brain scores for untrained versions of the four variants of GPT-2. We replicate Pasquiou et al.’s observation, that is, all four untrained models performed below the 1024d random baseline, which achieved an average brain score of 0.182 (on the 25% most reliable voxels). For example, gpt2-medium, which has the same number of dimensions, yields a brain correlation of 0.168 (vs. 0.417 for the corresponding trained model). While the left-right score difference increases with the number of parameters (r=0.99, p=8.3e-3), the relationship breaks down and is flat in the case of the untrained models (r=-0.03, p=0.97).
> Why did you also include Mamba? I’m not sure it adds anything or detracts anything but I’m just curious about the choice.
Mainly out of curiosity. It is the first time that a large language model not based on the Transformer architecture has competitive performance on par with more traditional Transformer-based language models. As discussed in the Results section, the Mamba family has similar performance as encoding models of brain functional data, which was not a priori obvious.
---
Rebuttal 2:
Comment: Thank you for the detailed response and clarification. This was very useful for me to understand the positioning of this paper. Although this may be my own personal opinion, I think it would be great to characterize this contribution through the lens of what you wrote in this response. I also really appreciate your response on why lateralization was important! My main feedback would be to provide some more framing on the connection between increased lateralization and increasingly large language models.
In general, I also really like the randomly initialized result. This gives me confidence in conclusions drawn in the paper. I understand that prior papers may have consistently shown the same result but I would strongly push for this baseline to properly characterize the role of linguistic learning in a camera ready version of the paper.
In light of the response, I will raise my score to reflect my better understanding of the contribution.
---
Rebuttal Comment 2.1:
Comment: Thank you for your constructive feedback. We will try to better clarify our position in the camera-ready paper.
> I would strongly push for this baseline to properly characterize the role of linguistic learning
We will include these analyses in the final version of the paper.
Our interpretation for the fact that baselines with untrained contextual models perform less well than fixed random embeddings is the following. The linear regression to fit the encoding models to brain data may learn some association between the (random) embedding associated with a given word and the brain activation elicited by this word. Contextual models (transformers, RNN, …) would then perform less well than fixed embedding models as the context introduces noise in the activation pattern associated with a given word. | Rebuttal 1:
Rebuttal: We would like to sincerely thank all four reviewers for their detailed and valuable reviews of our manuscript. We think they have helped to clarify our contribution and strengthen our paper by adding some checks, as described below.
An important point, raised by most reviewers, concerns the potential impact of differences in signal-to-noise ratio in the right and left hemispheres. Indeed, in regions where the signal-to-noise ratio is stronger, one intuitively expects higher correlations between brain data and model predictions. Could this explain away our findings?
To address this issue, we conducted new analyses and generated figures in the accompanying pdf.
Fig. 1 provides more details about inter-subject correlations (model-free) in the left and right hemispheres. In a nutshell, the inter-subject correlation (ISC) follows the diagonal in the left vs. right plot as a general trend, but several regions in the left hemisphere do indeed show higher reliability.
Fig. 2 shows that, in the different regions of interest, the average brain correlation follows the reliability of the signal (ISC), but the slope in the relationship between brain correlation and number of parameters does not generally follow such a trend.
So, differences in signal-to-noise ratio partly explain the mean brain score difference between left and right, but do not explain the increase in the difference. In other words, while a signal-to-noise ratio difference may explain an asymmetry in a single model, it does not account for the fact that larger models fit better and better left hemispheric activations than right hemispheric ones.
In addition, since the original submission, we have had time to extend the work to Chinese and French data from the Le Petit Prince dataset. Even though we found fewer pretrained large language models available for these two target languages, the analyses essentially reproduce the results obtained with English, in particular with respect to the scaling law in the left-right difference in brain correlation. We intend to add these additional results to the camera-ready paper.
Pdf: /pdf/fc1b1a6fc17dc7f4ee9c375a830f18eb7820039a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Textual Training for the Hassle-Free Removal of Unwanted Visual Data: Case Studies on OOD and Hateful Image Detection | Accept (poster) | Summary: This paper proposes a purely textual-based training method for detecting out-of-distribution or hateful images. The authors train an additional embedding layer over frozen CLIP encoders using text data. The authors propose the use of a novel loss function for this training. The method improves performance over baselines for most of the datasets that they evaluate on.
Strengths: Using only textual data for hate detection training is an interesting approach as it eliminates the need to source hateful images or ethical issues of creating paired image-caption datasets of hateful scenarios. The authors provide sufficient experimentation and ablation analysis to validate their claims that only training on text data is enough to identify OOD or hate content. Implementation details are listed and code is provided for reproducibility.
Weaknesses: The writing structure could be improved. The textual synthesis aspect is not clear to me and should be discussed in more detail in the main paper. The method could also be used for generic image classification tasks which brings me to question the choice of why such classification results have not been shown. Some more discussion over the baselines would be good too.
Technical Quality: 3
Clarity: 3
Questions for Authors: Clarifications to my questions in weaknesses section is sufficient
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have described limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we thank you for your thorough review of our paper. We particularly appreciate your recognition of our innovative approach using only textual data for hate detection, which eliminates the need to source hateful images and addresses ethical concerns. Additionally, we value your acknowledgment of our comprehensive experimentation, ablation analysis, and the provision of implementation details and code for reproducibility.
We are also immensely grateful for your excellent suggestions that can enhance the contents of our paper. We will address each of your concerns in detail in the following:
>**Q1**. The writing structure could be improved. The textual synthesis aspect is not clear to me and should be discussed in more detail in the main paper.
**A1**. We apologize for the lack of clarity. In our method, the purpose of textual data synthesis is to emulate the entire visual dataset for training. We assumed that using prompt templates, such as "a photo of a {}", to generate text data could effectively emulate visual data, based on observations of high CLIP zero-shot classification performance with similar prompts. Therefore, our textual data synthesis involves simply inserting numerous words into the prompt "This is a photo of a {}". In our experiments, we used approximately 370k predefined English words. Our results demonstrate that even this lightweight textual data synthesis method can produce a detector capable of identifying various types of unwanted images.
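As a concrete illustration, the synthesis step described above reduces to filling the fixed template with each vocabulary word; a minimal sketch (the three-word list here is illustrative only, whereas the paper uses roughly 370k predefined English words):

```python
TEMPLATE = "This is a photo of a {}"

def synthesize_texts(words):
    """Emulate a visual training set by slotting each word into the prompt."""
    return [TEMPLATE.format(w) for w in words]

# Tiny illustrative vocabulary; the real one is a large predefined word list.
corpus = synthesize_texts(["dog", "bicycle", "mountain"])
# corpus[0] == "This is a photo of a dog"
```

The resulting texts are then embedded with the frozen CLIP text encoder and used in place of images during training.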
>**Q2**. The method could also be used for generic image classification tasks which brings me to question the choice of why such classification results have not been shown.
**A2**. In fact, our proposed method can always perform generic image classification and unwanted data detection simultaneously. Figure 1 in our paper or the figure in our repository (https://github.com/HFTT-anonymous/HFTT) can aid in understanding this. HFTT utilizes the input embeddings and logit values as they are for zero-shot classification while simultaneously estimating the probability that an input sample is unwanted. For example, our method can be applied to solve a 1001-class classification task consisting of 1000 ImageNet classes plus one OOD class.
However, applying our method does not enhance the zero-shot image classification performance of VLMs. This is because our method focuses solely on optimizing trainable embeddings for out-distribution detection while keeping CLIP frozen. Consequently, the image classification performance of VLMs remains the same as their zero-shot classification performance even after applying our method. Hence, we did not present classification results. For instance, the ImageNet classification accuracy of CLIP-ViT-B/16 remains at 68.6%, regardless of whether our method is applied.
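A hedged sketch of this joint classification-plus-detection readout follows (synthetic embeddings and a softmax over cosine-similarity logits; these are our assumptions about the mechanism for illustration, not the authors' exact implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify_and_detect(image_emb, class_embs, trainable_embs, temp=0.07):
    """Return (predicted class, estimated probability the input is unwanted)."""
    all_embs = np.vstack([class_embs, trainable_embs])
    all_embs = all_embs / np.linalg.norm(all_embs, axis=1, keepdims=True)
    image_emb = image_emb / np.linalg.norm(image_emb)
    probs = softmax(all_embs @ image_emb / temp)
    n_cls = len(class_embs)
    # Class prediction comes from the task embeddings; the mass on the
    # trainable slots is the unwanted-data score.
    return int(probs[:n_cls].argmax()), float(probs[n_cls:].sum())

rng = np.random.default_rng(0)
class_embs = rng.normal(size=(10, 64))      # stand-in for the 1000 class texts
trainable_embs = rng.normal(size=(4, 64))   # stand-in for learned OOD embeddings
image_emb = class_embs[3] + 0.1 * rng.normal(size=64)  # image close to class 3
cls, p_unwanted = classify_and_detect(image_emb, class_embs, trainable_embs)
```

Because the class logits are left untouched and only extra slots are appended, the zero-shot classification decision is unchanged by the added embeddings, matching the point above.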
>**Q3**. Some more discussion over the baselines would be good too.
**A3**. Thank you for your suggestion. We provide further discussion on the strongest baselines and present results for various foundation models here:
**Comparison with SOTA OOD detection methods**
NegLabel [1] and CLIPN [2] are cutting-edge OOD detection methods.
NegLabel constructs an OOD corpus by selecting texts distant from the in-distribution texts from a predefined corpus, then compares the distances between the input image and those texts in the CLIP embedding space to detect OOD. While NegLabel shows high OOD detection performance on ImageNet (see Table B), it has the following limitations compared to our method:
- Although NegLabel does not require training additional parameters, it must compute the embeddings of all texts in the corpus and measure their similarity to the in-distribution texts to find the optimal OOD corpus for a given in-distribution. Our training method also requires nearly the same cost as obtaining the embeddings of all texts within a predefined corpus and calculating the similarities between those embeddings and the task+trainable embeddings, as discussed in Section 4.1. Thus, NegLabel and our method require the same level of optimization cost.
- Since NegLabel uses the embeddings of the determined OOD corpus as they are, it falls behind our method, which has trainable parameters, in terms of generalization. To demonstrate this, we further compare our method and NegLabel in the medical image domain. Specifically, we treat the ISIC-18 skin lesion diagnosis dataset [3] as in-distribution and the PathVQA [4] and PatchCamelyon [5] datasets as out-of-distribution. The ISIC-18 skin lesion diagnosis dataset is an image classification benchmark for seven skin disease categories.
Table A: OOD detection in the medical image domain
|OOD:|PVQA||PCAM||
|-|-|-|-|-|
|**Method**|**FPR** ↓|**AUROC** ↑|**FPR** ↓|**AUROC** ↑|
|NegLabel|37.44|94.11|48.07|94.86|
|CLIPN|35.47|84.64|3.10|98.76|
|HFTT (ours)|13.72|97.05|4.95|98.35|
Table A illustrates the limitations of NegLabel in terms of generalization. While NegLabel fails to construct an effective OOD corpus for the medical image dataset, our method achieves significantly higher performance by learning optimal embeddings for detection.
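For reference, the two metrics reported in Table A can be computed from raw detector scores as in the following sketch (synthetic score distributions; we assume higher scores mean "more in-distribution" and that FPR is measured at 95% TPR, as is conventional in this literature):

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """False-positive rate on OOD data at the threshold keeping 95% TPR on ID."""
    thresh = np.percentile(id_scores, 5)      # 95% of ID scores lie above this
    return float(np.mean(np.asarray(ood_scores) >= thresh))

def auroc(id_scores, ood_scores):
    """Probability that a random ID sample outscores a random OOD sample."""
    id_s, ood_s = np.asarray(id_scores), np.asarray(ood_scores)
    wins = (id_s[:, None] > ood_s[None, :]).astype(float)
    ties = (id_s[:, None] == ood_s[None, :]).astype(float)
    return float((wins + 0.5 * ties).mean())

rng = np.random.default_rng(0)
id_scores = rng.normal(1.0, 0.5, 1000)    # detector scores on in-distribution
ood_scores = rng.normal(0.0, 0.5, 1000)   # detector scores on OOD
```

The `auroc` here is the rank-based (Mann-Whitney) formulation, which is equivalent to the area under the ROC curve.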
CLIPN utilizes an additional "no" text encoder alongside CLIP. This additional text encoder predicts the probability that a given object is not present in an image. Thus, CLIPN predicts whether a given image is in-distribution or out-distribution by using the original CLIP text encoder and the "no" text encoder to estimate the probabilities, respectively. Images with a low probability of being in-distribution and a high probability of being out-distribution are identified as OOD.
Although CLIPN achieves high OOD detection performance on ImageNet (see Table B), it has the following limitations compared to our method:
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. The rebuttal answers my concerns, thus I am raising my score. I would suggest that the authors include these discussions in the paper. | Summary: This paper introduces an efficient and effective text-only training method for detecting undesired visual content. Its key contributions include a theoretical demonstration of how text data can substitute for visual data, a new loss function, and a method for synthesizing textual data. These efforts aim to segregate data from one mode using only the dataset from the other mode. The method is evaluated through experiments on OOD detection tasks and hateful image detection, demonstrating comparable performance.
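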
Strengths: 1. The utilization of text-only mode for detecting data in the other mode is a compelling approach, supported by existing research in various domains.
2. The training and inference processes are efficient, with a small number of trainable parameters and minimal additional computational costs during inference.
3. Despite the experimental results not consistently outperforming other methods, the high efficiency (no use of image datasets during training) can offset this limitation.
Weaknesses: 1. The trainable embedding learning process is unclear. According to the description, trainable embeddings are learned for N out-distribution data instances, which are then frozen during the test phase. It appears that the image embeddings obtained by the image encoder align with the frozen embeddings. If this is the case, the alignment process should be clearly elucidated. If not, the method for using the learned embeddings for images needs to be explained. The parameter N, representing the number of out-distribution data used during training, intuitively suggests that a larger N would enhance accurate distribution learning. However, the results in Table 8 present a contradictory viewpoint. More explanation should be given.
2. The proposed HFTT method can be applied to different pretrained models and OOD tasks. In addition to the image CLIP model, it is recommended that the authors conduct experiments using pretrained video-text models to assess performance changes in related video detection tasks. This additional experiment can shed light on how pretrained models influence performance when only one mode is utilized for learning, considering the crucial role of pre-learned cross-modal alignment knowledge in enabling text-only learning.
Technical Quality: 3
Clarity: 3
Questions for Authors: My main questions are (1) making the trainable embedding learning clearer and (2) the test on other pre-trained models to view how the pre-learned cross-modal alignment knowledge impacts the text-only learning.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The main limitations of this paper also lie in the two concerns: unclear description of the trainable embedding learning and no examination of other pre-trained vision/video-text models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we thank you for your thorough review of our paper. We particularly appreciate your recognition that our method is a compelling approach, utilizing the text modality alone to detect data in other modes, and that the training and inference processes are highly efficient. We are also immensely grateful for your excellent suggestions that can enhance the contents of our paper. We will address each of your concerns in detail in the following:
>**Q1**. The trainable embedding learning process is unclear. According to the description, trainable embeddings are learned for N out-distribution data instances, which are then frozen during the test phase. It appears that the image embeddings obtained by the image encoder align with the frozen embeddings. If this is the case, the alignment process should be clearly elucidated. If not, the method for using the learned embeddings for images needs to be explained.
**A1**. In Section 3.1 of our paper, we presented a motivating example demonstrating that a classifier obtained using only textual data in the output space of a model like CLIP, where images and texts are well-aligned, can also be applied to image data. Based on this, we proceed as follows:
1. We define trainable embeddings in the output space of CLIP that serve as the parameters of a classifier distinguishing between in-distribution and out-distribution data.
2. We train these embeddings using only textual data.
3. We use these trained embeddings to detect unwanted images.
In summary, the model parameters of CLIP remain fixed, and our trainable parameters are defined and trained within CLIP’s (image-text) joint embedding space. Thus, our method does not require any additional alignment process.
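The three steps above can be sketched in code. This is a minimal numpy illustration with random stand-in embeddings and made-up sizes, not the authors' implementation; in particular, `ood_score` and the softmax aggregation of the trainable embeddings' probability mass are our assumptions for illustration:

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d = 512  # CLIP joint-embedding dimension (illustrative)
task_emb = normalize(rng.normal(size=(1000, d)))  # stand-in for frozen text embeddings of class prompts
ood_emb = normalize(rng.normal(size=(20, d)))     # stand-in for trainable embeddings, frozen at test time

def ood_score(image_emb, temperature=0.01):
    """Share of softmax mass falling on the trainable (out-distribution) embeddings."""
    sims = normalize(image_emb) @ np.vstack([task_emb, ood_emb]).T / temperature
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()
    return probs[len(task_emb):].sum()

score = ood_score(rng.normal(size=d))  # a value in [0, 1]
```

Because the trainable embeddings live in the same joint space as the image embeddings, no alignment step beyond CLIP's own pretraining is needed.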
>**Q2**. The parameter N, representing the number of out-distribution data used during training, intuitively suggests that a larger N would enhance accurate distribution learning. However, the results in Table 8 present a contradictory viewpoint. More explanation should be given.
**A2**. We apologize for any confusion caused by our notation. The "N" in Table 8 refers to the number of trainable embeddings, not the number of out-distribution data. The number of trainable embeddings in our method represents the complexity of the classifier that distinguishes between in-distribution and out-distribution data. If the data in CLIP's output space form a very low-dimensional manifold, it would be possible to distinguish between in- and out-distribution data with a small number of trainable embeddings. As we can see in Table 8, a large "N" is not necessary for high unwanted image detection results. Indeed, various studies support the notion that deep learning models possess low-dimensional data manifolds [1,2].
[1] Ansuini et al. "Intrinsic dimension of data representations in deep neural networks." NeurIPS 2019.
[2] Moayeri et al. "Text-to-concept (and back) via cross-model alignment." ICML 2023.
>**Q3**. The test on other pre-trained models to view how the pre-learned cross-modal alignment knowledge impacts the text-only learning.
**A3**. Thank you very much for suggesting an expansion of our research domain. Video embedding models and detection tasks are less established than those in the image domain. We will investigate the experimental environment further and include results in the next version of our paper. Here, we provide the results for CLIP-L/14, BLIP-B/16, and BLIP-L/16 in addition to the CLIP-B/16 used in our study. Tables A, B, and C demonstrate that our method is effective across various vision-language models.
Table A: The experimental results on CLIP-L/14 (in-distribution: ImageNet)
|OOD:|iNaturalist||SUN||Places||Texture||NINCO||
|-|-|-|-|-|-|-|-|-|-|-|
|**Method**|**FPR** ↓ |**AUROC** ↑|**FPR** ↓|**AUROC** ↑|**FPR** ↓| **AUROC** ↑ |**FPR** ↓|**AUROC** ↑|**FPR** ↓|**AUROC** ↑|
|MSP|26.66|94.20|22.37|94.37|36.82|92.45|52.83|86.57|67.27|78.70|
|Energy|30.84|91.25|25.94|94.10|32.94|92.30|64.33|79.26|63.49|79.72|
|MaxLogit|32.76|90.96|26.48|92.96|**31.88**|92.39|72.08|73.85|**60.67**|**81.07**|
|MCM|26.96|94.19|22.77|94.37|36.74|92.44|52.66|86.56|68.16|78.65|
|HFTT (ours)|**24.10**|**94.58**|**17.80**|**95.39**|33.83|**93.09**|**52.06**|**86.58**|69.19|78.98|
Table B: The experimental results on BLIP-B/16 (in-distribution: ImageNet)
|OOD:|iNaturalist||SUN||Places||Texture||NINCO||
|-|-|-|-|-|-|-|-|-|-|-|
|**Method**|**FPR** ↓ | **AUROC** ↑ |**FPR** ↓ | **AUROC** ↑ |**FPR** ↓ | **AUROC** ↑ |**FPR** ↓ | **AUROC** ↑ |**FPR** ↓ | **AUROC** ↑ |
| MSP | 64.70 | 82.22 | 30.38 | 91.06 | 71.40 | 78.82 | 76.99 | 81.30 | **71.47** | 72.07 |
| Energy | 67.15 | 79.30 | 45.21 | 89.07 | 70.28 | 77.49 | 91.24 | 75.38 | 80.29 | **77.20** |
| MaxLogit | 69.57 | 75.44 | 69.57 | 71.19 | 69.86 | 76.26 | 93.55 | 60.31 | 88.58 | 56.37 |
| MCM| 64.41 | **82.29** | 30.21 | 91.05 | 70.53 | 79.32 | 75.84 | 81.55 | 71.56 | 72.02 |
| HFTT (ours) | **63.28** | 82.22 | **19.16** | **95.12** | **68.48** | **79.50** | **63.74** | **84.53** | 72.12 | 73.86 |
Table C: The experimental results on BLIP-L/16 (in-distribution: ImageNet)
| OOD: | iNaturalist || SUN || Places || Texture || NINCO ||
|-|-|-|-|-|-|-|-|-|-|-|
|**Method**|**FPR** ↓| **AUROC** ↑ |**FPR** ↓ | **AUROC** ↑ |**FPR** ↓ | **AUROC** ↑ |**FPR** ↓ | **AUROC** ↑ |**FPR** ↓ | **AUROC** ↑ |
| MSP | 51.20| 87.91| 22.37| 93.86| 61.63| 84.68| 64.85| 85.28| 65.96| 78.29 |
| Energy | 45.63 |87.23 |33.94 |90.29 |55.73 |85.91 |72.38 |82.16 |71.23| 77.49 |
| MaxLogit | 44.59| 86.94| 35.56| 86.45| **50.96**| **86.46**| 86.38| 71.22| 79.78| 67.59 |
| MCM| 50.75 |88.03 |22.34 |93.88 |60.88 |85.38 |64.71 |**85.39** |66.04| 78.32 |
| HFTT (ours) | **44.24**| **89.88**| **6.81**| **98.40**| 62.20| 84.16| **63.35**| 83.39| **64.82**| **80.46** |
---
Rebuttal Comment 1.1:
Title: replying to the response
Comment: Thanks for the response. It well addressed my concern-1. For my concern-2, they promise to include the results in their next version. So, I keep my rating Weak Accept. | Summary: This paper focuses on textual training methods to remove undesirable (such as biased or offensive) visual content and proposes a method for detecting unwanted visual content using only synthetic textual data to partition visual data. The classifier trained on textual content can be successfully transferred to visual content. The method consists of an objective function and a textual data synthesis method. The design of the loss does not require data annotation and the textual data synthesis method can emulate unknown visual data distribution into the training process with no extra cost. The proposed method was proven to be effective for out-of-distribution detection and hateful image detection.
Strengths: 1. This paper is clear and well-written.
2. The idea is innovative and solid with theoretical results. The proposed method is simple but effective and can be applied to even black-box foundation models.
3. The experiments are comprehensive and the results showed the effectiveness of this method.
4. The training and testing costs are discussed in this paper which could benefit the application of the proposed method.
Weaknesses: I do not see a significant weakness in the paper since it’s well-organized. Several clarification questions are listed below.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. This idea of training on textual content for transferring to visual content is interesting, could the authors provide more insights on more examples of applications of this method except for the OOD detection and hateful image detection? Is it possible to apply it to more binary classification tasks?
2. What encoders are used to obtain the task embedding? The authors mentioned using CLIP as the vision backbone, and is the text encoder during training also from CLIP?
3. How do you decide the number of task embeddings?
4. The paper mentions using the proposed method to detect unwanted content and uses two tasks (OOD detection and hateful image detection). If I understand correctly, my key concern is that the focus of this paper is closer to a novel method for binary classification with an application that classifies unwanted content. Could the authors share more thoughts regarding this? If so, I would suggest slightly paraphrasing the introduction to better reflect the contribution and indicate more potential applications.
5. Can this method be extended to multi-class classifications?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we thank you for your thorough review of our paper. We particularly appreciate your recognition that our paper has no significant weaknesses and is well-organized. We have made our best efforts to address your remaining concerns as follows. If our responses meet your expectations, we would be grateful if you could consider reflecting this in your rating:
>**Q1**. This idea of training on textual content for transferring to visual content is interesting, could the authors provide more insights on more examples of applications of this method except for the OOD detection and hateful image detection? Is it possible to apply it to more binary classification tasks?
**A1**. Our method can be applied to more binary classification tasks. We additionally demonstrate the applicability of our method, HFTT, in detecting low-quality images, which are commonly unwanted visual data beyond OOD and hateful images. Specifically, we consider the task of detecting corrupted images lurking within a raw visual dataset consisting of 1000 ImageNet classes. For this experiment, we use ImageNet and ImageNet-C. As shown in Table A below, HFTT consistently surpasses existing methods in the detection of corrupted images.
Table A: Corrupted image detection
|Method|FPR ↓|AUROC ↑|
|-|-|-|
|MSP|64.17|83.94|
|Energy|99.99|09.16|
|MaxLogit|78.01|68.47|
|MCM|51.54|89.06|
|HFTT (ours)|**42.13**|**92.81**|
>**Q2**. What encoders are used to obtain the task embedding? The authors mentioned using CLIP as the vision backbone, and is the text encoder during training also from CLIP?
**A2**. As discussed in Section 3.1 of our paper, our method assumes that text and image are well-aligned through contrastive learning, similar to CLIP. Therefore, if CLIP is used as the vision encoder, the text encoder must also be CLIP. If text and image are well-aligned, our method can be applied to models other than CLIP. To demonstrate this, we additionally apply our method to BLIP and observe the results. Table B shows that our method is also effective for BLIP.
Table B: The experimental results on BLIP (in-distribution: ImageNet)
|OOD:|iNaturalist||SUN||Places||Texture||NINCO||
|-|-|-|-|-|-|-|-|-|-|-|
|**Method**|**FPR** ↓|**AUROC** ↑|**FPR** ↓|**AUROC** ↑|**FPR** ↓|**AUROC** ↑ |**FPR** ↓|**AUROC** ↑|**FPR** ↓|**AUROC** ↑|
|MSP|64.70|82.22|30.38|91.06|71.40|78.82|76.99|81.30|**71.47**|72.07|
|Energy | 67.15 | 79.30 | 45.21 | 89.07 | 70.28 | 77.49 | 91.24 | 75.38 | 80.29 | **77.20** |
|MaxLogit |69.57 | 75.44 | 69.57 | 71.19 | 69.86 | 76.26 | 93.55 | 60.31 | 88.58 | 56.37 |
|MCM| 64.41| **82.29** | 30.21 | 91.05 | 70.53 | 79.32 | 75.84 | 81.55 | 71.56 | 72.02 |
|HFTT (ours)| **63.28** | 82.22 | **19.16** | **95.12** | **68.48** | **79.50** | **63.74** | **84.53** | 72.12 | 73.86 |
>**Q3**. How do you decide the number of task embeddings?
**A3**. The number of task embeddings is not a hyper-parameter but is determined by the in-distribution task. For example, if the in-distribution is ImageNet, the task embeddings are the text embeddings for the 1,000 ImageNet classes, such as ["a photo of a tench", ..., "a photo of a toilet paper"]. In the case of hateful image detection, we used representative hate phrases defined by the dataset as task embeddings. The number of trainable embeddings in our method is a hyper-parameter. We present an ablation study on this in Table 8 of Appendix C, and the results show that our method is not sensitive to this parameter.
>**Q4**. The paper mentions using the proposed method to detect unwanted content and uses two tasks (OOD detection and hateful image detection). If I understand correctly, my key concern is that the focus of this paper is closer to a novel method for binary classification with an application that classifies unwanted content. Could the authors share more thoughts regarding this? If so, I would suggest slightly paraphrasing the introduction to better reflect the contribution and indicate more potential applications.
**A4**. We are hesitant to consider our method as a novel approach for binary classification. Generally, binary classification is defined as the task of finding patterns that best distinguish between two different classes. However, in our scenario, we assume that only one (in-distribution) of the two classes has patterns, while we make no assumptions about the other class to cover all possible cases. Therefore, for binary classification tasks where both classes have specific patterns, our method may not lead to a good solution compared to methods that consider patterns of both classes. Thus, we would like to distinguish our method from general binary classification methods. We appreciate the reviewer for prompting us to view our method in this light.
>**Q5**. Can this method be extended to multi-class classifications?
**A5**. In fact, our proposed method can always perform multi-class classification and unwanted data detection simultaneously. Figure 1 in our paper or the figure in our repository (https://github.com/HFTT-anonymous/HFTT) can aid in understanding this. HFTT utilizes the input embeddings and logit values as they are for zero-shot classification while simultaneously estimating the probability that an input sample is unwanted. For example, our method can be applied to solve a 1001-class classification task consisting of 1000 ImageNet classes plus one OOD class.
However, applying our method does not enhance the zero-shot image classification performance of VLMs. This is because our method focuses solely on optimizing trainable embeddings for out-distribution detection while keeping CLIP frozen. Consequently, the image classification performance of VLMs remains the same as their zero-shot classification performance even after applying our method. For instance, the ImageNet classification accuracy of CLIP-ViT-B/16 remains at 68.6%, regardless of whether our method is applied.
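One hypothetical way to realize the 1001-class view described above is to append a single "unwanted" logit to the zero-shot class logits, here taken as the maximum similarity to any trainable embedding. This is a sketch with random stand-in embeddings; the aggregation rule is our assumption and may differ from the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_classes, n_trainable = 512, 1000, 16    # illustrative sizes
class_emb = rng.normal(size=(n_classes, d))  # stand-in for class-prompt text embeddings
ood_emb = rng.normal(size=(n_trainable, d))  # stand-in for the trained embeddings

def classify_1001(image_emb):
    """Zero-shot 1000-way classification plus one extra 'unwanted' class,
    whose logit is the maximum similarity to any trainable embedding."""
    class_logits = class_emb @ image_emb
    ood_logit = (ood_emb @ image_emb).max()
    return int(np.argmax(np.append(class_logits, ood_logit)))  # index 1000 = OOD

pred = classify_1001(class_emb[5])  # an embedding close to class 5 maps to class 5
```

Since the 1000 class logits are untouched, in-distribution predictions coincide with CLIP's zero-shot predictions, consistent with the unchanged 68.6% accuracy noted above.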
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response from the authors and all my concerns are addressed. I will increase my rating from 5 to 6 and suggest that the authors include the information in the revised paper. | Summary: This paper proposes an objective function for CLIP-based architecture to enhance out-of-distribution (OOD) detection. Instead of relying on OOD image data, the approach extracts OOD words from various sources and updates some trainable embeddings using predefined text embedding. Results show that the proposed approach outperforms previous methods that do not require in-distribution images and has comparable performance with methods that require in-distribution images.
Strengths: \+ This paper shows that it is possible to learn OOD detection without the need for in-distribution images, which I think is quite valuable and useful in practice
\+ The justification of the proposed idea seems well motivated and clearly presented
Weaknesses: \- The related work section lacks comprehensiveness. While the authors mention some CLIP-based methods, they overlook approaches closely related to their own. Notably, NegLabel (ICLR2024) and CLIPN (ICCV2023) also use textual features for OOD detection and are very similar to the proposed method. The authors should include these works in their related work section, provide benchmarks, and critically analyze the advantages of their approach compared to these existing methods.
\- In the experimental results, the authors compare results with CLIPN and NegLabel only in Tab. 2, while they are not mentioned in Tab. 1. Also, those two methods are neither cited nor explained in the related work. I believe it might have been a last-minute effort.
\- Some methods in the literature do not require any training (e.g. NegLabel). Given that the proposed approach involves additional training (even though without images), it is crucial to highlight the differences with these training-free methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: Authors should explain why the comparison with NegLabel and CLIPN is partial and the method are not even cited by the paper.
In addition authors should highlight advantages and disadvantages of their method which is in-domain image free versus approaches that are training free, such as NegLabel.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we express our gratitude for your thorough review of our manuscript. Particularly, we appreciate your recognition of our method as valuable and useful in practice, as well as your acknowledgment that the justification of our method is well-motivated and clearly presented. Furthermore, we appreciate your invaluable suggestions, which significantly enrich the substance of our work. In the subsequent sections, we meticulously address each of your concerns as follows:
>**W1**. The related work section lacks comprehensiveness. While the authors mention some CLIP-based methods, they overlook approaches closely related to their own. Notably, NegLabel (ICLR2024) and CLIPN (ICCV2023) also use textual features for OOD detection and are very similar to the proposed method. The authors should include these works in their related work section, provide benchmarks, and critically analyze the advantages of their approach compared to these existing methods.
**A1**. Thank you for your feedback on the two related studies. We can compare our method with each of these studies as follows. This discussion will be added to the final version of our paper:
**Comparison with NegLabel**
NegLabel constructs an OOD corpus by selecting texts distant from the in-distribution texts from a predefined corpus, then compares the distances between the input image and those texts in the CLIP embedding space to detect OOD. While NegLabel shows high OOD detection performance on ImageNet (see Table B), it has the following limitations compared to our method:
- Although NegLabel does not require training additional parameters, it must compute the embeddings of all texts in the corpus and measure their similarities to the in-distribution texts to find the optimal OOD corpus for a given in-distribution.
Our training method also requires nearly the same cost as obtaining the embeddings of all texts within a predefined corpus and calculating the similarities between those embeddings and the task+trainable embeddings, as discussed in Section 4.1. Thus, NegLabel and our method require the same level of optimization cost.
- Since NegLabel uses the embeddings of the determined OOD corpus as they are, it falls behind our method, which has trainable parameters, in terms of generalization. To demonstrate this, we further compare our method and NegLabel in the medical image domain. Specifically, we treat the ISIC-18 skin lesion diagnosis dataset [1] as in-distribution and the PathVQA [2] and PatchCamelyon [3] datasets as out-of-distribution. The ISIC-18 skin lesion diagnosis dataset is an image classification benchmark for seven skin disease categories.
Table A: OOD detection in the medical image domain
| OOD: | PVQA | | PCAM | |
| ----------------- | ----- | ----- | ----- | ----- |
| **Method** | **FPR** ↓ | **AUROC** ↑ | **FPR** ↓ | **AUROC** ↑ |
| NegLabel | 37.44 | 94.11 |48.07 | 94.86 |
| CLIPN | 35.47 | 84.64 |3.10 | 98.76 |
| HFTT (ours) | 13.72 | 97.05 |4.95 | 98.35 |
Table A illustrates the limitations of NegLabel in terms of generalization. While NegLabel fails to construct an effective OOD corpus for the medical image dataset, our method achieves significantly higher performance by learning optimal embeddings for detection.
**Comparison with CLIPN**
CLIPN utilizes an additional "no" text encoder alongside CLIP. This additional text encoder predicts the probability that a given object is not present in an image. Thus, CLIPN predicts whether a given image is in-distribution or out-distribution by using the original CLIP text encoder and the "no" text encoder to estimate the probabilities, respectively. Images with a low probability of being in-distribution and a high probability of being out-distribution are identified as OOD.
Although CLIPN achieves high OOD detection performance on ImageNet (see Table B), it has the following limitations compared to our method:
- CLIPN requires significantly higher inference costs due to the use of an additional text encoder.
- While our method requires lightweight training that does not involve images, CLIPN demands extensive and expensive training of the "no" text encoder on large vision-language datasets.
- CLIPN can only be applied to tasks where the distinction between in-distribution and out-distribution is clear and straightforward, such as classification datasets. This is because all training images must be classified as either "yes" or "no" images. Therefore, it is unsuitable for tasks dealing with abstract concepts, such as hateful image detection, as discussed in Section 4.3 of our paper.
- Our method can be easily applied to any detection task defined in natural language, whereas CLIPN shows significantly degraded performance for in-distribution tasks that fall outside the training distribution of the "no" text encoder. In terms of applicability, our proposed method surpasses CLIPN. To demonstrate this, we further compare our method with CLIPN in the medical image domain. Table A illustrates the limitations of CLIPN in terms of generalization. While CLIPN effectively detects PCAM, it exhibits very low detection performance on PVQA. In contrast, our method achieves high performance on both OOD tasks.
Table B: OOD detection performance on ImageNet in-distribution (average for Texture, Places, SUN, and iNaturalist)
| OOD: | Average ||
| ----------------- | ----- | ----- |
| **Method** | **FPR** ↓ | **AUROC** ↑ |
| CLIPN | 31.10 | 93.10 |
| NegLabel | 25.40 | 94.21 |
| HFTT (ours) | 33.33 | 91.76 |
[1] Codella, Noel, et al. "Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (isic)." arXiv 2019.
[2] He, Xuehai, et al. "Pathvqa: 30000+ questions for medical visual question answering." arXiv 2020.
[3] Veeling, Bastiaan S., et al. "Rotation equivariant CNNs for digital pathology." MICCAI 2018.
---
Rebuttal Comment 1.1:
Title: I keep my score
Comment: Although the authors did not answer my question about the reasons for having only a partial comparison with NegLabel and CLIPN in the main paper, they provided a complete and clear comparison with the mentioned methods. Thus I keep my positive score. | Rebuttal 1:
Rebuttal: We genuinely appreciate the reviewers' dedicated time and their valuable feedback. Before addressing each reviewer's specific concerns in detail below, we would like to summarize here the contributions of our research that have been recognized and the aspects that have been enhanced in our study.
## Reviewer Acknowledgements of Our Paper's Strengths
- The theoretical justification of our proposed method is solid (CzuR, LurW).
- We demonstrate that it is possible to obtain an effective unwanted image detector without using images (CzuR, QxHY, kqnV).
- Our method is innovative, efficient, effective, and useful in practice (CzuR, LurW, QxHY, kqnV).
- The experiments are comprehensive, and the results show the effectiveness of our proposed method (LurW, kqnV).
- For the reproducibility of our method, we provide a cost analysis, implementation details, and code (LurW, kqnV).
## Enhancements in Our Paper
- We provide additional discussion on the comparison with state-of-the-art OOD detection methods.
- We present comparative results against NegLabel and CLIPN in the medical image domain.
- We offer experimental results and discussion regarding other foundation models, such as BLIP.
- We provide a discussion on the application of our method to multi-class classification tasks.
We provide detailed responses to each reviewer's comments below. We have diligently addressed each of the four reviewers' concerns to the best of our abilities, and believe that the results further underscore the contributions of our study. We plan to integrate the reviewers' suggestions into the revisions of the manuscript, as we believe these adjustments will significantly bolster the paper's overall strength. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Prospective Learning: Learning for a Dynamic Future | Accept (poster) | Summary: In this work, the authors formalize the notion of “prospective learning”, which considers that the data to be learned are sampled from a stochastic process, rather than being sampled from a fixed distribution. Prospective learning considers time by giving a set of hypotheses for each time step during inference instead of just one fixed hypothesis as in the probably approximately correct (PAC) framework. The authors give a theoretical framework (similar to the one developed in PAC) to characterize under which conditions a stochastic process can be learned by a time-aware (prospective learning) model. These are explained through their mathematical formulation and simple examples that highlight the main features of a stochastic process to determine if they can be “prospectively learned” or not. Finally, the authors perform a set of experiments, as real-world analogies of the simple examples, and show how time-aware prospective models can learn (or not) the tasks as described by their theory.
Strengths: - This manuscript gives a useful theoretical base for understanding and describing a very intuitive concept of learning considering time, instead of a static data generator distribution. The theoretical framework allows for guarantees of function approximation by a class of time-varying hypotheses, provides a solution in explicit form for a binary classification problem with Gaussian inputs, and also analyzes the complexity of learning a specific type of process (periodic process).
- Furthermore, the authors ground their theoretical work in very simple illustrative examples, making the work easier to understand. These are followed by a very well-executed larger-scale experiment to verify whether their theoretical assumptions and simple-example conclusions hold in more complex settings.
- The work presented has been thoroughly studied from theoretical and experimental fronts, with a coherent narrative to explain an intuitive consideration that has been used in practice for a while in machine learning (as it is similar to RL, or meta-learning, but not the same).
- Their Appendix (FAQs) was very helpful in explaining how this framework relates to other work around meta-learning or continual learning. On this front, they also highlight how other methods or problem setups complement their introduced concept.
Weaknesses: - The complexity of the theoretical framework might make it challenging for some readers to follow. Despite this, the authors do seem to try to bridge the gap by providing explanations and building intuition around the theoretical developments. The appendix is particularly helpful in this regard.
- Following the previous comment, the clarity of the experimental results (section 5) seems to fall short compared to the well-developed theory part. The results for scenario 3 require further investigation, as they seem unexpected based on the theoretical predictions in the simple examples.
- While the appendix discusses how this intuition relates to existing concepts in RL and meta-learning, the "plain English" explanation, as they refer to it, can be further improved by drawing more formal connections between their framework and these established areas (See question below).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1.- In definition 2 “Weak prospective learnability”, is a stochastic process “weakly learnable” if a model with a sequence of hypotheses can perform above chance with an arbitrarily low probability? Is it enough it exists one $t$ where this is true to consider it weakly learnable?
2.- Does the concept of aliasing apply when the hypothesis class is not enough to cover a periodic stochastic process as in example 1? On the other hand, would you see some redundancy in optimal hypotheses if this family is larger than the ones needed to describe the stochastic process?
3.- In section 2.1, I found the distinction between MLE and time-aware MLE models a bit confusing. Both models use MLE to find the solution, and the difference is just how the hypothesis space is constructed. If this is correct, I would recommend changing it to “time-agnostic” vs “time-aware” or something along these lines.
4.- What is the intuition for the time-aware models not being able to solve scenario 3 in the experiments (section 5)? This is quite surprising considering that transformer architectures, for example, are auto-regressive and that there is some temporal structure that could be learned. Could you clarify what the task structure is, how information is given to the model, and how it is trained?
5.- A model with a class of hypotheses that change over time can learn prospectively if there is some temporal structure that can be exploited, but each risk term in equation 5 is instantly dependent on the current hypothesis, so the performance in the future is not affected by the current time step hypothesis. Why then might it be beneficial to decrease the discount factor $\gamma$? if each hypothesis is only influenced by its own time step what is the problem with having a $\gamma=1$?
6.- Considering Appendix A (FAQs), I believe a clearer connection between prospective learning and related concepts could be established by explicitly comparing their optimization objectives, training procedures, and problem structures. While this would undoubtedly be a significant undertaking, given the lack of standardized descriptions in these related fields, and likely beyond the scope of this work to address every single one, it would be a valuable contribution toward resolving readers' confusion.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments and valuable suggestions. We are glad that they have found the theoretical and experimental fronts of our work to be thorough. We also appreciate the positive aspects of our work that they have mentioned: (1) prospective learning as a useful theoretical foundation for including time in learning problems, (2) simple experiments that illustrate the theoretical findings, (3) well-executed large-scale experiments verifying the theoretical claims, and (4) clear descriptions of how prospective learning is related to other learning paradigms.
We have addressed the remaining comments and questions below. If our responses are satisfactory, we would be very grateful if you can champion the fresh perspective explored in our work.
> **The clarity of the experimental results (Section 5) seems to fall short compared to the well-developed theory part. The results for scenario 3 require further investigation, as they seem unexpected based on the theoretical predictions in the simple examples.**
We agree that some of the experiments were not sufficiently fleshed out in the original submission. Specifically, in Scenario 3, we chose a hidden Markov model for the data where the Markov chain corresponding to the data distributions had a steady-state distribution. As we discussed on Lines 125-139, we need to discount the prospective risk over time for prospective learning to be meaningful for stochastic processes which can reach a steady-state distribution. Therefore, although the original experiments seem unappealing at first because the prospective risk is trivial, the learner does converge to the Bayes risk and this actually follows the theory completely—just that the Bayes risk is trivially large for the example we chose.
We have fixed this now by choosing a hierarchical hidden Markov model where the process cannot reach a steady-state distribution. Please see the updated experiment in Fig. 5 in the Rebuttal PDF.
> **Q1) In definition 2 “Weak prospective learnability”, … Is it enough that there exists one time point $t$ where this is true to consider it weakly learnable?**
You are correct. We will fix the definition. For a process to be weakly learnable, there must exist a time $t'$ such that for all times $t > t'$, the inequality on risks holds.
> **Q2) Does the concept of aliasing apply when the hypothesis class is not enough to cover a periodic stochastic process as in example 1? On the other hand, would you see some redundancy in optimal hypotheses if this family is larger than the ones needed to describe the stochastic process?**
Great question! When the hypothesis space is not rich enough, i.e., there does not exist a hypothesis whose prospective risk matches the Bayes risk, then time-aware ERM learns the optimal hypothesis in the hypothesis class (best-in-class). This can lead to aliasing. For example, if data is drawn from a periodic process with period 5 and the hypothesis class only contains sequences with period at most 3, then the prospective risk of even the best-in-class hypothesis can be trivial due to aliasing. On the other hand, if the hypothesis class is too large for the uniform concentration assumption in Theorem 1 to hold, then the risk of time-aware ERM might not converge to the Bayes risk.
The above paragraph is about hypothesis-class-based aliasing. There is another form of aliasing, due to sampling. Suppose we have a periodic process, but the samples we observe from it are received at a rate lower than the Nyquist rate. Models trained on such samples will suffer from aliasing. Sampling-based aliasing can also occur if the resolution of the time encoding is not sufficient. In simpler words, if the time instant of each datum is not recorded precisely, then the time-aware ERM learner might not see high-frequency fluctuations in the data stream. This is similar to how audio quality drops if an MP3 file is sub-sampled below 44.1 kHz.
In general, the answer to your question is similar to the corresponding answer one might give for PAC learning. We need some inductive bias for learning to be possible. This inductive bias can come from guessing the period (either directly, or searching for the correct period like we do in structured risk minimization), or from guessing the required resolution of the time encoding (one might need to record time in seconds for high-frequency data, but only in weeks for visual recognition data).
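To make the aliasing point above concrete, here is a small self-contained sketch (our own toy construction, not from the paper): a binary label sequence with period 5 is fit by the best hypothesis of a given period. Because gcd(3, 5) = 1, every residue class mod 3 sees all five phases equally often, so no period-3 hypothesis can beat the majority-label base rate.

```python
from collections import Counter

# Toy aliasing demo: labels repeat with period 5; a hypothesis class of
# period-3 predictors cannot represent this, and its best member only
# reaches the majority-label accuracy.
labels = [1, 1, 1, 0, 0] * 30  # period-5 process (deterministic here), 150 samples
T = len(labels)

def best_periodic_accuracy(period):
    """Accuracy of the best predictor that only depends on t mod period."""
    correct = 0
    for r in range(period):
        bucket = [labels[t] for t in range(T) if t % period == r]
        correct += Counter(bucket).most_common(1)[0][1]  # majority vote per phase
    return correct / T

print(best_periodic_accuracy(5))  # -> 1.0: the correct period recovers the labels
print(best_periodic_accuracy(3))  # -> 0.6: aliasing caps accuracy at the base rate
```

The period-3 hypothesis class is "best-in-class" at 0.6 accuracy, the fraction of 1-labels, exactly the trivial risk described above.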
> **Q3) Both models use MLE to find the solution and the difference is just how the hypothesis space is constructed. If this is correct, I would recommend changing it to “time-agnostic” vs “time-aware” or something along these lines.**
We agree that it is confusing. We will fix this to say “time-agnostic ERM” vs. “Prospective ERM”.
> **Q4) What is the intuition for the time-aware models not being able to solve scenario 3 in the experiments (section 5)? This is quite surprising considering that transformer architectures for example are auto-regressive and that there is some temporal structure that could be learned.**
In Scenario 3, we had chosen a hidden Markov model for the data where the Markov chain corresponding to the data distributions had a steady-state distribution. As we discussed on Lines 125-139, we need to discount the prospective risk over time for prospective learning to be meaningful for stochastic processes which can reach a steady-state distribution. The prospective risk does converge to the Bayes risk—just that the Bayes risk is trivially large for the example we chose.
Please see the updated experiment in Fig. 5 in the Rebuttal PDF.
To clarify the experimental setup in Scenario 3, we generated hidden states of an HMM using the transition matrix $\Gamma_4$ on Line 740 in Appendix C. Each hidden state corresponds to a specific distribution/task, in our case a set of classes, detailed on Line 728 in Appendix C. Data are drawn from this distribution/task.
---
Rebuttal 2:
Title: Rebuttal by Authors pt. 2
Comment: > **Q5) A model with a class of hypotheses that change over time can learn prospectively if there is some temporal structure that can be exploited, but each risk term in equation 5 is instantly dependent on the current hypothesis, so the performance in the future is not affected by the current time step hypothesis. Why then might it be beneficial to decrease the discount factor $\gamma$? If each hypothesis is only influenced by its own time step, what is the problem with having $\gamma = 1$?**
This is a good point. Yes, future loss at time $t'$ is not affected by the current hypothesis at time $t$. We choose a discount factor to model real-world scenarios where one only wants to predict the near future. In Scenario 3, for a mixing Markov chain, choosing $\gamma = 1$ can give trivial solutions; after the Markov chain mixes completely, there are no long-term predictable patterns. However, in the near future, when the chain has not mixed completely, time-aware ERM can exploit the correlations for prospective learning. One can restrict the time horizon in prospective learning by simply cutting off time at some large value, or by discounting the loss.
> **Q6) Considering Appendix A (FAQs), I believe a clearer connection between prospective learning and related concepts could be established by explicitly comparing their optimization objectives, training procedures, and problem structures. While this would undoubtedly be a significant undertaking, given the lack of standardized descriptions in these related fields, and likely beyond the scope of this work to address every single one, it would be a valuable contribution toward resolving readers' confusion.**
> **While the appendix discusses how this intuition relates to existing concepts in RL and meta-learning, the "plain English" explanation, as they refer to it, can be further improved by drawing more formal connections between their framework and these established areas (See question below).**
This is a very good idea and we are very thankful for it. We will update Appendix A in the camera ready version with formal connections to existing ideas in the field. This will allow us to explicitly compare the different problem setups. If this content expands beyond the scope of this present paper on prospective learning, we are very keen on writing a separate manuscript. This can be very valuable to our field.
---
Rebuttal Comment 2.1:
Comment: Thanks to the authors for addressing my concerns. I'm generally satisfied with the clarifications provided, but I do have a few additional comments:
In Q4: What is the difference between the original scenario 3 experiment in Section 5 and the new one? The old data distribution had a steady-state distribution, which had a higher Bayes risk—why is this? Is it because the hidden state defining the task distribution was changing constantly, but the overall sampling process was in steady-state? Is the new configuration and the results shown in Fig. 5 designed to create a significant period within training (10 timesteps) with a specific hidden state, and then clearly switch between hidden states to generate a meaningful learning signal? I did my best to rephrase what I understood from the rebuttal—is the explanation along these lines?
Would the time-agnostic MLP model be able to solve the task if some context information (e.g., the current task ID) were given as input? Are you providing any context information to the time-aware model? If not, is the time-aware model somehow inferring the hidden state of the data generation process? If so, that would be useful to highlight as an advantage of the proposed method.
The fact that the originally presented model was not able to learn the task properly based on the structure of the task is not a flaw in itself. In fact, keeping that experiment and using the new one to clearly point out why we see a different result now would be really helpful for the reader.
Also, it seems that the labels in Fig. 5 are swapped.
Q5: Thanks, this makes sense now, and I guess it is related to the explanation in Q4?
---
Reply to Comment 2.1.1:
Comment: Thank you for taking the time to respond to our rebuttal. We are glad that your concerns were addressed. If you are satisfied with our response, we would be grateful if you could raise your score and champion the acceptance of our paper.
> **The old data distribution had a steady-state distribution, which had a higher Bayes risk—why is this?**
> **What is the difference between the original scenario 3 experiment in Section 5 and the new one?**
Consider the following example. Suppose we have a Markov chain corresponding to switching between two tasks $P_1$ and $P_2$ with transition matrix $[[0.2, 0.8], [0.8, 0.2]]$, i.e., if the data at the current timestep was drawn from $P_1$, then in the next timestep data will be drawn from $P_2$ with probability 0.8. The steady-state distribution of this Markov chain is $[0.5, 0.5]$. In other words, as the Markov chain approaches the steady-state distribution (which happens asymptotically), the prospective learner loses the ability to predict which state the Markov chain will be in at some future time $t'$. The prospective risk in Eqn. (1) takes the limit $\tau \to \infty$.
Suppose now that task $P_1$ has classes $\{1, 2\}$ and task $P_2$ has classes $\{2, 1\}$, i.e., the labels are flipped; then the prospective Bayes risk is trivially 0.5. If the classes do not “clash”, i.e., $P_1 \equiv \{1, 2\}$ and $P_2 \equiv \{3, 4\}$, then notice that a learner that predicts both $\{1, 3\}$ for inputs corresponding to ground-truth classes 1 and 3, and both $\{2, 4\}$ for inputs from ground-truth classes 2 and 4, will achieve zero prospective risk. Therefore, a trivial Bayes risk in prospective learning can come from (a) the Markov chain having a steady-state distribution, and (b) clashes of the classes between the tasks.
In the Scenario 3 example in the original submission, we had chosen an example where both (a) and (b) were occurring. The stationary distribution for Figure 3 was a weighted mixture of the distributions of the 4 tasks created from the CIFAR10 dataset (transition matrix $\Gamma_4$ on Line 740 in Appendix C), and the tasks were the ones given on Line 728 in Appendix C (which do “clash”). As a result, no single hypothesis can simultaneously achieve low risk on data from all 4 tasks. Hence, no hypothesis can achieve a low risk on the stationary distribution; in other words, the prospective Bayes risk is high.
On the other hand, the example in the rebuttal PDF considers a hierarchical HMM which does not have a stationary distribution. The hierarchical HMM transitions every 10 steps across two sets of communicating states. As a result, the distribution in the future is predictable, and there exists a hypothesis that achieves a low prospective risk on this distribution.
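As a numerical sanity check of the mixing argument (our sketch, not the authors' code), repeatedly applying the two-state transition matrix above drives any initial task distribution to the steady state [0.5, 0.5], after which the future task is unpredictable:

```python
# Power-iterate the row-stochastic transition matrix from the example above.
P = [[0.2, 0.8], [0.8, 0.2]]
dist = [1.0, 0.0]  # start: data is currently drawn from task P_1
for _ in range(50):  # one application of the transition matrix per timestep
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]
print(dist)  # -> [0.5, 0.5] up to floating-point error
```

The second eigenvalue of this matrix is -0.6, so the deviation from the steady state shrinks geometrically and is negligible after 50 steps.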
> **Would the time-agnostic MLP model be able to solve the task if some context information (e.g., the current task ID) were given as input?**
Yes. However, task boundaries are not available in prospective learning.
> **Are you providing any context information to the time-aware model?**
No. There is no contextual information available to the time-aware learner.
> **If not, is the time-aware model somehow inferring the hidden state of the data generation process?**
Yes. Think of how an ideal prospective learner might work: it would observe the data at each time-step, use an algorithm like Baum-Welch to estimate the transition matrix for the tasks and build a hypothesis for each unique task. For inference at time $t’$, it would predict the distribution over tasks at time $t’$ based on the data observed up to time $t$, and use a hypothesis corresponding to the most likely task, or sample the hypothesis with a probability proportional to its corresponding task being present at time $t’$. This is of course the ideal prospective learner, and it would converge to the Bayes risk under standard assumptions (e.g., consistency of the Baum-Welch estimates).
It is interesting that in Theorem 1, we could show that time-aware ERM can also converge to the Bayes risk. This is precisely because it is doing something equivalent to inferring the hidden state of the data generating process.
We will include this discussion as a remark at the end of Section 4.
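The inference step of this ideal prospective learner can be sketched in a few lines (a hypothetical illustration: the function name `predict_task`, the transition-matrix values, and the belief vector are our own assumptions, and the Baum-Welch estimation step is omitted):

```python
# Given an (already estimated) task transition matrix P and the current belief
# over tasks, predict the task distribution k steps ahead by powering the
# chain, then pick the hypothesis of the most likely task.
def predict_task(belief, P, k):
    n = len(P)
    for _ in range(k):  # propagate the belief one timestep per iteration
        belief = [sum(belief[i] * P[i][j] for i in range(n)) for j in range(n)]
    return max(range(n), key=lambda j: belief[j]), belief

P = [[0.9, 0.1], [0.1, 0.9]]  # slowly mixing two-task chain (assumed values)
task, belief = predict_task([1.0, 0.0], P, k=3)
print(task)  # -> 0: three steps ahead, task 0 is still the most likely
```

One could also sample a hypothesis with probability proportional to `belief`, as described above, instead of taking the argmax.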
> **keeping that experiment and using the new one to clearly point out why we see a different result now would be really helpful for the reader.**
We will keep the original example of Scenario 3 and contrast it with the new example. We agree that this should be very useful to the reader.
> **Also, it seems that the labels in Fig. 5 are swapped.**
Indeed. We apologize for the oversight. We will fix it.
> **Q5: Thanks, this makes sense now, and I guess it is related to the explanation in Q4?**
Yes, discounting the risk changes the problem to effectively have a finite horizon. If the Markov chain does not mix by the end of this effective time horizon, the prospective Bayes risk can be non-trivial even if the tasks clash. | Summary: The paper focuses on a new paradigm of learning called "prospective learning" (PL), as opposed to the Probably Approximately Correct (PAC) paradigm under which current AI systems are designed. PAC is time-agnostic given the data, while PL is time-aware. The paper clearly outlines different scenarios of PL with examples and distinguishes the proposed paradigm from others. It also showcases experimental validation of PL in different scenarios.
Strengths: - The paper is clearly written and easy to follow
- The paper seems to pose a new, realistic paradigm of learning which incorporates a time component, compared to the traditional paradigm, which is time-agnostic.
- The paper has strong theoretical background.
Weaknesses: - The paper fails to explain how training on unlabeled data would work in the proposed paradigm.
- The authors also don't consider the real world natural scenario of non-IID data where the data is a continuous video whose stochastic process will be unknown.
- The paper lacks an explanation of how PL would handle or prevent catastrophic forgetting, since in the PAC paradigm it arises from distribution shift over time.
Technical Quality: 3
Clarity: 3
Questions for Authors: - If data-incremental learning is closer to the problem formulation of PL, then how are multiple epochs handled across all three cases of data type (IID, non-IID, etc.)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments on our work. We are glad that they recognize our theoretical contributions.
> **The paper fails to explain how training on unlabeled data would work in the proposed paradigm.**
This is a good question. We have been inspired by the work of Steve Hanneke [1]. They develop some results for the so-called “self-adaptive setting” where, in addition to the past data ($z_{\leq t}$ in our notation), the learner also has access to future inputs $\{x_s : t \leq s \leq t'\}$ when it makes a prediction at time $t'$. Hanneke showed that there exist optimistically universal learners for the self-adaptive setting. In simple words, if a stochastic process can be self-adaptively learned strongly, then there exists a learner which does so. Just like our Theorem 1, their result is a consistency result.
Hanneke does not consider the setting where the true hypothesis changes over time, which is what we are interested in. But we believe that their work gives us a nice foundation to build upon as we try to understand how to use unlabeled future data for prospective learning. This is part of future work. We will mention this in the discussion.
[1] Hanneke, Steve. “Learning whenever learning is possible: Universal learning under general stochastic processes.” *Journal of Machine Learning Research* 22.130 (2021): 1-116.
> **The authors also don't consider the real world natural scenario of non-IID data where the data is a continuous video whose stochastic process will be unknown.**
In fact, prospective learning is ideally suited to address problems with non-IID data, e.g., situations where one is modeling data from a video. The reason for this is as follows.
Theorem 1 works for general stochastic processes. It does not need to know the class of the stochastic process, e.g., whether it is a Markov chain, or a hidden Markov model etc. This is why we think it is quite remarkable. Theorem 1 says that in spite of such a general setting, given samples from the process up to time $t$, we can build a sequence of hypotheses that can be used for any time in the future. Just like standard empirical risk minimization (ERM) allows one to build a hypothesis that can be used for any test datum drawn from the same distribution, Theorem 1 allows us to learn any stochastic process using a procedure that is conceptually quite similar to ERM, except that the predictor takes time as input. The consistency of standard ERM is one of the first results of PAC learning—and a cornerstone result. Theorem 1 is a similar result for prospective learning. Just like standard ERM makes certain assumptions about the concentration of the empirical loss and the learner’s hypothesis class, Theorem 1 also makes assumptions about consistency and concentration in Eqns. (3-4). These are rather standard, and quite benign, assumptions.
Real-world stochastic processes, e.g., images in a video, may or may not satisfy the conditions of Theorem 1. This is no different from how real-world data may or may not satisfy the assumptions of standard PAC learning. We cannot easily verify the assumptions of PAC learning in practice (whether the hypothesis class contains a model whose population loss equals the Bayes error, and whether we have uniform concentration of the empirical loss around the test loss). And similarly, the assumptions of prospective learning may be difficult to verify in practice. But this does not stop us from implementing PAC learning. And similarly, one can implement prospective learning for video data.
The most important practical recommendation of our paper for implementations on video data is that the architecture should encode the time of each image frame, i.e., implement time-aware ERM in Eqn. (5).
> **The paper lacks an explanation of how PL would handle or prevent catastrophic forgetting, since in the PAC paradigm it arises from distribution shift over time.**
Catastrophic forgetting in PAC learning occurs when future training data has a different distribution than past data. This is exactly the setting addressed in prospective learning. In simple words, the central idea of our paper is that if the distribution of data changes in a predictable fashion, then prospective learning can prevent catastrophic forgetting. To give an example, if the stochastic process is periodic, i.e., there is a finite number of distinct distributions that are seen as a function of time, then time-aware ERM in Theorem 1 selects a sequence of hypotheses, one element is assigned to each of these marginal distributions. And given a test datum at a particular future time, the learner simply selects the appropriate predictor for that time. Standard PAC learning would not be able to address such periodic shifts because it does not model changes over time.
> **If data-incremental learning is closer to the problem formulation of PL, then how are multiple epochs handled across all three cases of data type (IID, non-IID, etc.)?**
Prospective learning enforces no computational constraints on the learner. This is different from other settings where one might be interested in such constraints, e.g., for computational reasons in data incremental learning, or for biological reasons in continual learning. A prospective learner is allowed to train on all past data for as many epochs as necessary. To clarify, as mentioned in footnote 7 on Page 8, we do not do any continual updates for any of the experiments.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for addressing my concerns. I am satisfied with the above clarifications. | Summary: Update: I read the rebuttal and I found it convincing, especially the explanation of the main theorem of the paper. Additionally, the time-aware ERM idea based on time-conditioning sounds nice.
---
This paper proposes a prospective learning framework in which a sequence of future hypotheses is produced using all examples which have already been observed over the course of training. In principle, this framework allows for problems where the data is drawn from a stochastic process which introduces temporal correlations and non-identical distributions over time steps. The theory introduced for this seems interesting, but I don't see any particularly surprising results. In particular, I don't see any constructive results establishing a benefit for using the "prospective transformer" as opposed to the other methods, such as conditioning on the time step. Finally, the experimental setups seem very simplistic and contrived, and the conclusions of the experiments seem muddled.
notes from reading the paper:
-Prospective learning is a time-aware alternative to PAC learning.
-Paper shows failure cases of PAC learning on some simple problems and also establishes some new algorithms on MNIST/CIFAR variants.
-Prospective learning assumes data is drawn from an unknown stochastic process.
-Data defined as z_t = (x_t, y_t).
-Prospective learner maps from history of data to sequence of hypotheses on all time steps.
-Only focus on expected future loss, and not risk.
Strengths: -The paper is very nicely written and the idea is presented cleanly and clearly.
-The topic of improving learning in non-stationary environments and framing this problem well, is very important.
Weaknesses: -The experimental settings seem very artificial. Additionally, the difference between the methods is unclear in terms of performance, and none of them generally achieve Bayes-optimal performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: -Do you think it could be possible to use reinforcement learning (games) to get a more natural application for your framework? For example, in a game like Montezuma's Revenge, the agent should get farther over the course of training and will be able to see new challenges and new parts of the environment.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -The proposed "prospective transformer" I believe is not new, but was explored here (and I believe in many earlier papers as well), yet I don't see that these were cited (https://arxiv.org/pdf/2404.19737).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We are glad that they find our theoretical contributions to be important for learning in non-stationary environments. We believe we have addressed all your concerns in the response below. If you think these responses are satisfactory, we would be very grateful if you can update your score.
> **The theory introduced for this seems interesting, but I don't see any particularly surprising results. In particular, I don't see any constructive results establishing a benefit for using the "prospective transformer" as opposed to the other methods, such as conditioning on the time step**
Let us argue why Theorem 1 is somewhat remarkable. Consider a general stochastic process (does not have to be periodic, or reach a steady-state distribution). Theorem 1 says that in spite of such a general setting, given samples from the process up to time $t$, we can build a sequence of hypotheses that can be used for any time in the future. Just like standard empirical risk minimization (ERM) allows one to build a hypothesis that can be used for any test datum drawn from the same distribution, Theorem 1 allows us to learn any stochastic process using a procedure that is conceptually quite similar to ERM, except that the predictor takes time as input. The consistency of standard ERM is one of the first results of PAC learning—and a cornerstone result. Theorem 1 is a similar result for prospective learning. It is a definitive first step on this important problem.
We have made some minor modifications to the experimental parts of this paper since the deadline. Please see the common response to all reviewers above.
In the context of your question, we have now figured out how to exactly implement time-aware ERM in Eqn. (5). It is simply a network that is trained on a dataset of inputs $(t, X_t)$ to predict the outputs $Y_t$. Any MLP, CNN, or attention-based network can be repurposed to use this modified input via an encoding of time (Line 326). In particular, there is no need to use an auto-regressive loss like we had initially done in the NeurIPS submission.
Theorem 1 directly suggests this practical solution. Roughly speaking, for a practitioner who wishes that their models do not degrade as data changes over time, our paper proves, theoretically and empirically, that appending time to the train and test input data is sufficient. We suspect this is a big deal.
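A minimal sketch of this recipe (our illustration, not the authors' code: the sinusoidal encoding and the names `time_encoding`/`time_aware_input` are assumptions, and the paper's Line 326 encoding may differ) simply appends an encoding of the time step to each input before feeding it to any standard model:

```python
import math

def time_encoding(t, dim=8, base=10000.0):
    """Map an integer time step to a dim-dimensional sin/cos feature vector."""
    enc = []
    for i in range(dim // 2):
        freq = 1.0 / (base ** (2 * i / dim))  # geometrically spaced frequencies
        enc.append(math.sin(t * freq))
        enc.append(math.cos(t * freq))
    return enc

def time_aware_input(x, t, dim=8):
    """Concatenate raw features with the time encoding; feed this to any model."""
    return list(x) + time_encoding(t, dim)

features = time_aware_input([0.3, -1.2], t=17)
print(len(features))  # -> 10 (2 raw features + 8 time-encoding dimensions)
```

Training then proceeds as ordinary ERM on the augmented pairs `(time_aware_input(x, t), y)`, which is what makes the approach a drop-in change for existing pipelines.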
> **The experimental settings seem very artificial. Additionally, the difference between the methods is unclear in terms of performance, and none of them generally achieve the bayes optimal performance.**
This is a theoretical paper. We have constructed a more-or-less exhaustive set of “scenarios” (IID data in Scenario 1, independent but not identically distributed data in Scenario 2, neither independent nor identically distributed data in Scenario 3, and situations where past predictions affect future data in Scenario 4). Our goal in doing so is to study precisely the performance of our proposed time-aware ERM and some baseline methods. In particular, we can calculate the Bayes risk for the experiments in Fig. 1. This is the gold standard for any method.
When using non-synthetic data, our experimental scenarios are quite similar to those in existing papers on continual learning. The gold standard for experiments on non-synthetic data would be Oracle, which is a learner that knows exactly the distribution from which the test datum at any future time $t’$ is sampled; the risk of Oracle is therefore even “better” than Bayes risk. As we see, the training curves in Fig. 2 do converge to the risk of Oracle over time.
In Scenario 2, time-aware ERM achieves Bayes risk while time-agnostic methods only achieve trivial risk. This shows that time-agnostic methods fail to solve even a simple synthetic problem.
In our original Scenario 3, the transition matrix for the Markov chain of the hidden states was chosen to be such that the chain could converge to a steady-state distribution. As we discussed on Lines 125-139, prospective learning is only meaningful in these settings if one uses a discounted loss. Our experiments were using the non-discounted loss, and therefore the prospective risk of time-aware ERM was converging to the trivial risk. Time-agnostic ERM converges to the trivial risk for Scenario 3, in general.
We have rectified the situation now using a different hidden Markov chain which does not have a steady-state distribution. See Fig. 5 in the PDF Rebuttal. Time-aware ERM for the modified Scenario 3 does converge to the Bayes risk over time.
> **Do you think it could be possible to use reinforcement learning (games) to get a more natural application for your framework? For example, in a game like montezuma's revenge, the agent should get farther over the course of training and will be able to see new challenges and new parts of the environment.**
This is a very good idea. In our Scenario 4, we have discussed the long term risk of a Markov Decision Process. We also developed an interesting “algorithm” for this in Appendix F. In broad strokes, this setting is similar to the problem in Montezuma’s revenge where future data depends upon past decisions. We have not instantiated the learner for Scenario 4 in experiments on non-synthetic data yet. This is mostly because the current algorithm in Appendix F involves learning both the value function and forecasting the future data, this would be difficult to pull off together. We plan to investigate this in the future.
---
Rebuttal 2:
Title: Rebuttal by Authors Pt. 2
Comment: > **The proposed "prospective transformer" I believe is not new, but was explored here (and I believe in many earlier papers as well), yet I don't see that these were cited**
We thank the reviewer for pointing us to this, and we will cite this in our paper. In the linked paper, the authors introduce an auxiliary loss during training that forces the model to predict the next $n$-tokens at once. But for inference, they only perform next-token prediction. In “our prospective transformer”, we have the network predict the outcome of a future datum, given past data in context. During inference, both networks (prospective and auto-regressive) in our work are capable of making predictions arbitrarily far into the future. This was made possible by the choice of the positional (time) embedding we use in these models, which is different from the ones used in language modeling.
All this said, please see the common response to all reviewers which discusses our proposed modifications to the experimental parts of this paper. We have now figured out how to exactly implement a time-aware ERM in Eqn (5). It is simply a network that is trained on a dataset of inputs (t, X\_t) to predict the outputs Y\_t. Any MLP, CNN, or an attention-based network can be repurposed to use this modified input using an encoding of time (Line 326). There is no need to use the prospective or auto-regressive Transformer to implement prospective learning like we had initially done in the submission. We will therefore remove the method named “prospective Transformer” in the camera ready version of the paper.
We will summarize this response as a footnote and cite the paper that you have linked to. | Summary: The paper develops a new theoretical framework to address machine learning problems where data distributions and objectives evolve over time. Unlike the traditional PAC learning framework that assumes static distributions, this paper introduces "Prospective Learning" (PL), which models data as a stochastic process and emphasizes minimizing future risk based on past data. This approach integrates time as a crucial element in learning goals and algorithm design.
Strengths: I like the fact that the paper considers a fundamental problem and a very general framework, inspired by motivating examples such as biological systems. There are many solid contributions. First, the paper formally defines PL by incorporating time into the hypothesis class, distinguishing it from PAC learning, and characterizes the stochastic processes that are either strongly or weakly prospectively learnable, offering theoretical guarantees for PL under reasonable assumptions. Second, it proposes a "time-aware" version of ERM, showing that factoring in time during hypothesis selection can solve prospective learning problems that traditional ERM methods, which ignore time, cannot. Lastly, the paper includes numerical experiments on datasets like MNIST and CIFAR-10, illustrating that time-aware learners significantly outperform their time-agnostic counterparts in dynamic environments. The paper is well-written and the main messages are quite clear.
Weaknesses: The theoretical results are primarily asymptotic, and the applicability to finite-sample settings requires further investigation. In the definition of the prospective risk around (1), the limit is taken to the infinity. In reinforcement learning there are both asymptotic regret with a discounted factor (which looks similar to the definition in the paper) as well as finite-sample regret. Is it possible to develop some results for finite-sample, or having to rely on asymptotic is a limitation of the current framework?
Implementing time-aware ERM and scaling it to modern large-scale ML and popular models may be challenging, particularly for deep architecture, big data, and real-time applications. The paper would benefit from a more detailed discussion on the computational complexity and practical considerations of the proposed methods. Comments on either computational complexity from the theory side or practical scalability from the application side are welcomed.
Technical Quality: 4
Clarity: 4
Questions for Authors: When comparing with online learning and sequential decision making, the authors emphasize that the optimal hypothesis can change over time. However, there has been a line of literature (e.g., see Besbes et al. 2015 and related literature) studying the so-called “dynamic regret” and non-stationary online learning, where the optimal hypothesis can change every round. This line of work proposes various conditions to characterize the learnability and complexity of non-stationary online learning, such as the variation of the loss function. I think a comparison of prospective learning and non-stationary online learning should be added into the paper.
Omar Besbes, Yonatan Gur, and Assaf Zeevi. Non-stationary stochastic optimization. Operations Research, 63(5):1227–1244, 2015.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors discuss the limitations of the paper, though more discussion of the comparison to dynamic regret and non-stationary online learning, of finite-sample guarantees, and of computational complexity should be added. The paper is primarily theoretical and does not have direct societal consequences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments and valuable suggestions. We are glad that they recognize that prospective learning addresses a fundamental problem. And we are glad that they recognize the theoretical and experimental contributions of our work. If the Reviewer thinks our responses are satisfactory, we would be very grateful if they can increase their score.
> **The theoretical results are primarily asymptotic, and the applicability to finite-sample settings requires further investigation. In the definition of the prospective risk around (1), the limit is taken to the infinity. In reinforcement learning there are both asymptotic regret with a discounted factor (which looks similar to the definition in the paper) as well as finite-sample regret. Is it possible to develop some results for finite-sample, or having to rely on asymptotic is a limitation of the current framework?**
The result in Theorem 1 is about the asymptotics. In Remark 2, we do provide a sample complexity bound for prospective learning of periodic processes. As future work, we are indeed exploring sample complexity of prospectively learning more general stochastic processes such as Hidden Markov models.
The limit as $\tau \to \infty$ on Line 80 in the definition of prospective risk is different from assuming an asymptotically large number of samples in Theorem 1. Line 80 uses the infinite-horizon risk for the following reason. In prospective learning, if one uses a finite time horizon, then the problem becomes similar to a multi-task learning problem. This is investigated in works like [27, 28, 29] cited in the paper. We focused on the infinite-horizon prospective learning problem to force the learner to model the evolution of the stochastic process. We appreciate the Reviewer’s point. For some processes, e.g., Markov processes that reach a steady-state distribution, it is not possible to prospect arbitrarily far into the future. For such processes, we have to use a discounted prospective risk (mentioned on Line 82). We have now proved a corollary of Theorem 1 for discounted risks, which we will add to the main paper. We will also cite the finite-sample regret bounds from the RL literature.
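For concreteness, one standard way to write such a discounted risk (the normalization by $1-\gamma$ and the conditioning below are illustrative choices, not necessarily the exact definition on Line 82) is

$$
\mathcal{R}_\gamma(h \mid z_{\le t}) \;=\; (1-\gamma) \sum_{s > t} \gamma^{\,s-t}\, \mathbb{E}\big[\ell\big(h_s(X_s),\, Y_s\big)\big], \qquad \gamma \in (0,1),
$$

which stays finite even when the process mixes to a steady state, and recovers the undiscounted prospective risk as $\gamma \to 1$ whenever the latter exists.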
> **Implementing time-aware ERM and scaling it to modern large-scale ML and popular models may be challenging, particularly for deep architecture, big data, and real-time applications. The paper would benefit from a more detailed discussion on the computational complexity and practical considerations of the proposed methods. Comments on either computational complexity from the theory side or practical scalability from the application side are welcomed.**
We have figured out a few minor changes to the experimental section of the paper since the deadline. Please see the common response to all Reviewers for these proposed changes.
Time-aware ERM is actually quite easy to implement. It is simply a network that is trained on a dataset of inputs (t, X\_t) to predict the outputs Y\_t. Any MLP, CNN, or attention-based network can be repurposed to use this modified input using an encoding of time (Line 326). In practice, perhaps the only thing one must be careful about is that the encoding of time should be sufficiently rich to incorporate all past data. In other words, we are interested in an absolute encoding of time, unlike Transformers where the position encoding is used only within the context window. This is easy to achieve using a large number of logarithmically-spaced Fourier frequencies, or a binary encoding of time in minutes/seconds, as we have shown in the experiments.
We will add the above response as a remark in the paper.
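To make this concrete, here is a minimal sketch of such an absolute time encoding and the modified input; the number of frequencies and the exact spacing are illustrative choices, not the settings used in our experiments.

```python
import numpy as np

def time_encoding(t, n_freqs=8):
    """Absolute encoding of time t with logarithmically-spaced Fourier frequencies."""
    freqs = 2.0 ** -np.arange(n_freqs)   # 1, 1/2, 1/4, ... covers long horizons
    angles = 2 * np.pi * freqs * t
    return np.concatenate([np.sin(angles), np.cos(angles)])

def prospective_input(t, x, n_freqs=8):
    """Append the time encoding to the raw features, yielding the (t, X_t) input
    that any MLP, CNN, or attention-based network can consume."""
    return np.concatenate([time_encoding(t, n_freqs),
                           np.ravel(np.asarray(x, dtype=float))])

z = prospective_input(t=17, x=[0.3, -1.2])   # 2 * n_freqs time features + 2 raw features
```

A time-aware (prospective) ERM learner then simply fits a standard network on pairs (prospective_input(t, x_t), y_t).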
> **When comparing with online learning and sequential decision making, the author emphasize that the optimal hypothesis can change over time. However, there has been a line of literature (e.g., see Besbes et al. 2015 and related literature) studying the so-called “dynamic regret” and non-stationary online learning, where the optimal hypothesis can change every round. This line of work proposes various conditions to characterize the learnability and complexity of non-stationary online learning, such as the variation of the loss function. I think a comparison of prospective learning and non-stationary online learning should be added into the paper.**
> **Omar Besbes, Yonatan Gur, and Assaf Zeevi. Non-stationary stochastic optimization. Operations Research, 63(5):1227–1244, 2015.**
Thank you for the reference. We will clarify the distinction in the camera ready version.
Although in both cases the optimal hypothesis changes over time, the stochastic optimization problem studied in Besbes et al. 2015 and the prospective learning problem studied in this paper are somewhat different. The former optimizes over the stochastic process X\_t to minimize a dynamic regret, i.e., it builds a stochastic approximation that tracks the drifting optimum of the loss, while the latter optimizes over the hypothesis class H\_t to find the best hypothesis for future data.
> **More discussion on comparison to dynamic regret and non-stationary online learning, finite-sample guarantees, and computational complexity, should be added**
We appreciate the reviewer’s constructive feedback, and we will expand the paper to include a detailed discussion of the topics above. Our plan is to expand the FAQ to discuss these ideas and formally compare them to prospective learning.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed answer to my questions! I am satisfied with the feedback and maintain my score, in favor of accepting the paper. | Rebuttal 1:
Rebuttal: ### **Common response to all Reviewers**
We thank the reviewers for lending their expertise to assess our work and for helping us improve it. We are glad that the reviewers are positively inclined towards this work. They find that the problem is fundamental and that our paper presents a general framework with strong theory [NX2Q, wUXj, Zv5g]; that the problem is important and the paper is well-written [2cje, wUXj]; and that the work is thorough with illustrative examples [Zv5g]. Individual responses to each reviewer are included below.
Based on Reviewer feedback, we will make the following changes to the manuscript to improve the experimental parts of the paper.
**New experiments on Scenario 3 (neither independent nor identically distributed data).**
For Scenario 3, we had chosen a Hidden Markov Model where the Markov chain corresponding to the data distribution had a steady-state distribution. As we discussed on Lines 125-139, one needs to discount the prospective risk over time for learning to be meaningful for such processes. The experiments on CIFAR-10 in Fig. 3 did not use discounting, which is why the prospective risk was high. The experiment was correct; it is just that even the Bayes risk for this process is quite large (see Lines 125-139).
In Fig. 5 in the Rebuttal PDF, using synthetic data, we set up a hierarchical HMM where the underlying Markov chain has two communicating sets of states, and the chain transitions across these sets deterministically after 10 timesteps. This process does not have a steady-state distribution. As our theory suggests, time-aware ERM (renamed to Prospective ERM) converges to the Bayes risk. We will add this figure to the paper. We will also conduct a similar experiment using a hierarchical HMM on CIFAR-10 data to replace Fig. 3; we expect the results to be similar to those in Fig. 5.
**Comparing time-aware ERM with other continual learning algorithms.**
Prospective learning is designed to address situations where the data, and the optimal hypothesis, evolve over time. Task-agnostic online continual learning methods are the closest algorithms in the literature that can work in this setting. We implemented (a) Follow The Leader, (b) Online SGD, and (c) online variational Bayes [1]. These algorithms are not designed for prospective learning, but they are designed to handle a data distribution that changes with time $t$. The setup is identical to that of Fig. 2 in the paper, i.e., Scenario 2 with independent but not identically distributed data. As Fig. 4 in the Rebuttal PDF shows, on tasks constructed from both synthetic and MNIST data, these baseline algorithms achieve trivial prospective risk. Even the average risk on *past data* for these baselines is trivial (0.5 for synthetic data, 0.75 for MNIST). This experiment suggests that these three methods cannot handle changing data distributions, even when there are only two distinct distributions of data and the changes are predictable. In contrast, the prospective risk of Time-MLP converges to zero over time.
Fig. 4 in Rebuttal PDF will replace Fig. 2a and 2b in the current manuscript. We will conduct a similar experiment on CIFAR-10 to replace Fig. 2c.
See footnote 7 on Page 8 in the paper. In our experiments, for every time $t$ we use 3 training datasets that are realizations of $z_{\leq t}$ and calculate the prospective risk on 100 realizations of $Z_{> t}$. For a time-horizon of 400, this entails 1,200 trained models for each method. This is why we could not finish similar experiments on CIFAR-10. But we will add them to the camera ready.
[1] Zeno, Chen, et al. Task agnostic continual learning using online variational Bayes. arXiv:1803.10123 (2018).
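As a caricature of why such baselines fail (a toy problem with two deterministically alternating distributions, not our actual MNIST setup), Follow The Leader fits the best constant predictor on all past data and stays at trivial risk, while a predictor allowed to depend on $t$ is exact:

```python
import numpy as np

T = 400
y = np.arange(T) % 2                      # two distributions alternating predictably

ftl_errors, aware_errors = [], []
for t in range(1, T):
    past = y[:t]
    ftl_pred = int(past.mean() > 0.5)     # Follow The Leader: majority label so far
    ftl_errors.append(ftl_pred != y[t])
    aware_errors.append((t % 2) != y[t])  # time-aware rule: predict from t itself

assert np.mean(ftl_errors) > 0.4          # FTL never escapes trivial risk
assert np.mean(aware_errors) == 0.0       # the time-aware predictor is exact
```

This mirrors, in miniature, the trivial-risk behavior of the baselines in Fig. 4 of the Rebuttal PDF.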
**Cleaning up the different learners. There will be only two methods: Time-agnostic and Time-aware ERM (renamed to Prospective ERM); Auto-regressive and prospective Transformer will be moved to the Appendix.**
We have now figured out a way to exactly implement the time-aware ERM of Eqn (5) (which will be renamed Prospective ERM). Theorem 1 suggests that we must simply use a network that is trained on a dataset of inputs $(t, X\_t)$ to predict the outputs $Y\_t$. Any MLP, CNN, or attention-based network can be repurposed to use this modified input using an encoding of time (Line 326). In particular, there is no need to use an auto-regressive loss, or to fit a prospective Transformer like we had initially done in the manuscript. Roughly speaking, for a practitioner who wishes that their models not degrade as data changes over time, our paper proves, theoretically and empirically, that appending time to the train and test input data is sufficient.
We will therefore move experiments that use auto-regressive and prospective Transformers in Fig. 2 and 3 to the Appendix. Continual learning baselines will be added to those figures instead, as discussed above.
**Large language models may not be good prospective learners**
It is an interesting question whether LLMs, which are trained using auto-regressive likelihoods with Transformer-based architectures, can do prospective learning. To study this, we used Llama-7B and Gemma-7B to evaluate the prospective risk for Scenarios 1 to 3. The prompt contains a few samples from the stochastic process (sub-sequences of $(Y_t)$ consisting of 0s and 1s) and an English-language description of the family of stochastic processes that generated the data. The LLM is tasked with completing the prompt with the next 20 most likely samples. The data and evaluation setup is identical to Fig. 1 in the paper. As Fig. 6 in the Rebuttal PDF shows, the prospective risk does not improve over time for any of the LLMs. In contrast, the risk of the time-aware MLE in Fig. 1 in the main paper, or Fig. 6 in the Rebuttal, converges to the Bayes risk. These results will be added to Section 5.
We believe these changes to the experimental section will improve the quality of our paper. We are very thankful to the Reviewers for suggesting some of these changes.
Pdf: /pdf/7ca30f05267e82267e7325b0df2568469bda3ba9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Non-asymptotic Approximation Error Bounds of Parameterized Quantum Circuits | Accept (spotlight) | Summary: This is a solid paper for approximating Hölder smooth functions using parameterized Quantum Circuits (PQC). The results show that using PQC for approximation can achieve better results than those in Lu's paper, especially when $K$, the length of the local region of Taylor expansion, is not large and the dimension $d$ is large.
Strengths: Finding the approximation rate for different structures in deep learning is an important task to understand deep learning. The author presents approximation results for PQC, and I find this result interesting.
Weaknesses: 1. For continuous and Lipschitz continuous functions, the author only establishes a universal approximation theorem. Can the author improve the result to find the approximation rate based on [22] cited in the paper?
2. The author's result is better than Lu's paper since they consider the approximation by $L^\infty$-norm. In [22], it is shown that for the $L^p$ norm, the coefficient in Lu's paper can also not exponentially depend on $d$. In this case, is the result in this paper still better than Lu's paper?
3. Based on my knowledge, in Lu's paper, they obtain a better rate of $K$ than the author's paper due to the use of the bit extraction technique, achieved by ReLU FNN, but the parameters in this technique need to be very large. Therefore, can your paper provide the bound of parameters? If so, this would be a significant benefit of your paper.
4. The structure of the neural network is hard to train since it is not wide but deep when $K$ is large. Can you make the neural network shallower and wider?
5. I think some tables and comments in the appendix can be shown on the main page such as Table S1.
Technical Quality: 3
Clarity: 2
Questions for Authors: Mentioned in the Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: All right.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable comments. We would like to provide a detailed response to the insightful questions raised by the reviewer.
> 1. For continuous and Lipschitz continuous functions, the author only establishes the Universal Approximation Theory. Can the author improve the result to find the approximation rate based on [22] cited in the paper?
Reply 1: In fact, we do prove the approximation rate for Lipschitz continuous functions, for both the Bernstein polynomial approximation and the local Taylor expansion method. For the PQC approximation error bound using Bernstein polynomials, please refer to Theorem 3. For the PQC approximation error bound using local Taylor expansion, please refer to Theorem 4. We would like to note that the class of Hölder smooth functions considered in Theorem 4 includes Hölder continuous functions and Lipschitz continuous functions. Details can be found in the definition of Hölder smooth functions in Lines 208–214.
> 2. The author's result is better than Lu's paper since they consider the approximation by $L^\infty$-norm. In [22], it is shown that for the $L^p$-norm, the coefficient in Lu's paper can also not exponentially depend on $d$. In this case, is the result in this paper still better than Lu's paper?
Reply 2: We would like to thank the reviewer for the insightful question. We believe that PQC function approximation under the $L^p$ norm is certainly worth further investigation, which would require careful calculations and proofs. For now we cannot say whether our PQCs would have better performance than Lu's paper for approximation under the $L^p$-norm, however, we believe that using the technique in [22] could aid in studying PQC approximation performance under the $L^p$ norm, and we will certainly pursue this in a later stage.
> 3. Based on my knowledge, in Lu's paper, they obtain a better rate of $K$ than the author's paper due to the use of the bit extraction technique, achieved by ReLU FNN, but the parameters in this technique need to be very large. Therefore, can your paper provide the bound of parameters? If so, this would be a significant benefit of your paper.
Reply 3: Yes, as the reviewer pointed out, Lu's paper achieves a better dependence on $K$ due to the bit extraction technique, although the parameters can be extremely large. In the quantum strategy presented in our work, however, we do not use bit extraction but instead implement a quantum circuit that realizes the Taylor coefficients, with parameters ranging between $0$ and $2\pi$ since they represent angles of parameterized rotation gates.
> 4. The structure of the neural network is hard to train since it is not wide but deep when $K$ is large. Can you make the neural network shallower and wider?
Reply 4: We thank the reviewer for highlighting this issue. Our PQC structure does not involve a tradeoff between width and depth like deep neural networks, and thus we cannot easily make PQCs shallower and wider. However, there is a fundamental difference between PQC and classical neural networks, i.e., wider PQCs are harder to train from the perspective of gradients because of the Barren Plateaus phenomenon [1]. For classical deep neural networks, the gradient can vanish exponentially with the number of layers (depth), while for quantum neural networks, the gradient decreases exponentially with the number of qubits (width). Therefore, we believe it is unnecessary to make our PQCs shallower and wider from the perspective of gradients.
[1] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren Plateaus in Quantum Neural Network Training Landscapes, Nat Commun 9, 4812 (2018).
> 5. I think some tables and comments in the appendix can be shown on the main page such as Table S1.
Reply 5: Thank you for the suggestion. We will consider moving some of the content from the appendix to the main text in the revised version, as the reviewer suggested.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply, I will increase my score to 7. | Summary: The authors explore the power and limitations of parameterized quantum circuits (PQCs).
They show that a large class of multivariate polynomials and smooth functions can be efficiently (approximately) represented by PQCs.
More importantly, they show that the requirements of such PQCs compare favorably to their classical counterparts (deep ReLu nets, etc.)
Strengths: The main strength of the paper is that it addresses an important problem of practical and theoretical interest regarding the power and limitations of parameterized quantum circuits.
The results are novel, and the highlighted open problems are of great interest.
Weaknesses: See questions below.
Technical Quality: 3
Clarity: 4
Questions for Authors: Page 1:
Maybe you can early on present a figure depicting some PQC (and a deep ReLu net, for comparison)?
This could help the reader relate to the subject matter.
Page 2:
Can PQCs trivially simulate classical deep learning nets?
Page 3:
Should you define mixed states for comparison with the pure states?
Should you emphasize that the chosen basis is complete?
Page 4:
How is Equation (1) derived?
Equation (2) captures your notion of expressibility, right?
Page 5:
Can you please add some more discussion about Theorem 2?
Page 6:
Can you please elaborate on Lines 220 -- 221?
Page 7:
Please elaborate on the caption of Figure 2.
Line 263:
Why is such an $f$ interesting?
Page 8:
Figure 3:
Please elaborate on the caption.
Figure 4:
What is the moral regarding K? The higher the better? :)
Page 9:
Discussion is good.
Please add some more future work/next steps.
Please do not wait until the end of the paper to state that your work is (???) the first of its kind :)
What is the relationship of your work to (classical or quantum) computational complexity, and Turing machines? Please provide some discussion about how your work connects to the theory of computation.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the contributions and novelty of our work! Here we respond to the insightful comments and questions raised by the reviewer.
> 1. Page 1: Maybe you can early on present a figure depicting some PQC (and a deep ReLu net, for comparison)? This could help the reader relate to the subject matter.
Reply 1: Thank you for your kind suggestion. We will consider including a figure as suggested in the revised version.
> 2. Page 2: Can PQCs trivially simulate classical deep learning nets?
Reply 2: Thanks for this insightful question! We believe that PQCs cannot trivially simulate classical deep neural networks. Using PQCs to simulate a classical neural network requires efficiently encoding classical data and implementing non-linear activation functions. Specifically, costly amplitude encoding techniques, such as QRAM, are necessary to encode classical data. Additionally, while our PQCs can properly approximate the non-linear activation function, applying this PQC element-wise to a pure quantum state is computationally challenging. In this work, we design novel PQCs to approximate functions with non-asymptotic error bounds by implementing various polynomials rather than directly simulating classical deep neural networks.
> 3. Page 3: Should you define mixed states for comparison with the pure states? Should you emphasize that the chosen basis is complete?
Reply 3: Since mixed states are not used in our technical discussion, we did not define this concept. We will consider adding the definitions as suggested in the revised version.
> 4. Page 4: How is Equation (1) derived? Equation (2) captures your notion of expressibility, right?
Reply 4: Equation (1) defines a PQC with data re-uploading structures as shown in [11], consisting of interleaved data-encoding circuit blocks and trainable circuit blocks. A formal definition and introduction to such data re-uploading PQCs can be found in the Appendix (Sec A.2). Equation (2) encapsulates our notion of expressibility. Formally, expressibility is captured by the class of functions that Equation (2) can approximate by adjusting its trainable parameters.
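As an illustration of this interleaved structure (the RY/RZ gate choices and the depth below are ours, not the specific ansatz analyzed in the paper), a single-qubit data re-uploading circuit can be simulated directly with 2x2 unitaries:

```python
import numpy as np

def ry(theta):
    """Trainable single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(phi):
    """Data-encoding rotation about the Z axis."""
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def pqc(x, thetas):
    """Interleave trainable RY blocks with data-encoding RZ(x) blocks on |0>,
    then return the expectation of the Pauli-Z observable."""
    state = np.array([1.0, 0.0], dtype=complex)
    state = ry(thetas[0]) @ state                 # initial trainable block
    for theta in thetas[1:]:                      # re-upload the data each layer
        state = ry(theta) @ (rz(x) @ state)
    Z = np.diag([1.0, -1.0]).astype(complex)
    return float(np.real(state.conj() @ Z @ state))

rng = np.random.default_rng(0)
thetas = 2 * np.pi * rng.random(4)                # parameters are angles in [0, 2*pi)
vals = [pqc(x, thetas) for x in np.linspace(0.0, 1.0, 5)]
assert all(-1.0 - 1e-9 <= v <= 1.0 + 1e-9 for v in vals)
```

Adjusting `thetas` sweeps out the class of functions such a circuit can express, which is the notion of expressibility captured by Equation (2).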
> 5. Page 5: Can you please add some more discussion about Theorem 2?
Reply 5: Theorem 2 essentially characterizes the universal approximation property of PQCs, corresponding to the celebrated universal approximation theorem of classical neural networks. We will add several sentences in line 187: "Theorem 2 serves as the quantum counterpart to the universal approximation theorem of classical neural networks. Moreover, the PQCs in Theorem 2 are explicitly constructed without any impractical assumptions..."
> 6. Page 6: Can you please elaborate on Lines 220 -- 221?
Reply 6: To elaborate on Lines 220-221, we will add the following sentence in Line 218: "The motivation of localization is to determine the local point, i.e., $x_0$, in Equation (8). An intuitive configuration is illustrated in Figure 2, where the stars represent the local points. To implement this localization process, a straightforward method is to realize a step function $D'(x)=\frac{k}{K}$ if $x\in[\frac{k}{K}, \frac{k+1}{K}]$ for $k=0, 1, \dots, K-1$. To approximate this discontinuous step function with polynomials, we exclude the $\Delta$-neighborhood of the discontinuous points and approximate the remaining parts by polynomials."
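For reference, the target localization map itself is straightforward to write down (how the right endpoint $x=1$ is assigned is an illustrative choice on our part):

```python
import numpy as np

K = 10

def D(x):
    """The step function D'(x): send x in [k/K, (k+1)/K) to the local point k/K,
    for k = 0, 1, ..., K-1; the right endpoint x = 1 is clamped into the last cell."""
    k = np.minimum(np.floor(K * np.asarray(x)), K - 1)
    return k / K
```

The PQC approximates this discontinuous map by polynomials everywhere except a small $\Delta$-neighborhood of each jump.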
> 7. Page 7: Please elaborate on the caption of Figure 2.
Reply 7: After clarifying the motivation for the localization step, we believe that Figure 2 will be more comprehensible.
> 8. Line 263: Why is such an $f$ interesting?
Reply 8: We emphasize that this work investigates the approximation capability of PQCs from a theoretical aspect, and the purpose of the numerical results is to support the theoretical findings. We select a bivariate polynomial function as the target function because it belongs to the function space studied in this context and is also easier to visualize, as shown in Figure 4. Also, there are situations where, despite the PQC's powerful expressibility, an imperfect optimization algorithm results in poor learning performance. Therefore, to reduce the impact of optimization imperfections on the final error, we choose a relatively simple polynomial function as our target for approximation.
> 9. Page 8: Figure 3: Please elaborate on the caption.
Reply 9: We will add the following sentence to the caption of Figure 3: "The single-qubit PQCs trained to approximate the target localization function $D(x)$ for $K=2$ and $K=10$".
> 10. Figure 4: What is the moral regarding K? The higher the better? :)
Reply 10: Yes, with increasing $K$, the PQC exhibits a smaller approximation error as expected, which is consistent with our theoretical findings. We will add relevant discussion in the revised version.
> 11. Page 9: Discussion is good. Please add some more future work/next steps. Please do not wait until the end of the paper to state that your work is (???) the first of its kind :)
Reply 11: Thanks for your comments. We will make corresponding changes in the revised version according to your suggestions.
> 12. What is the relationship of your work to (classical or quantum) computational complexity, and Turing machines? Please provide some discussion about how your work connects to the theory of computation.
Reply 12: Thanks for this interesting question! Complexity theory primarily addresses computational problems, whereas our work is centered on approximation problems. Although our results demonstrate that the size of PQCs is smaller than that required by some neural networks when approximating certain types of functions, we cannot draw definitive conclusions about the complexity of classical or quantum computing based on our findings. This is because both neural networks and PQCs are specific learning models that cannot fully characterize the computational power of classical and quantum computers.
---
Rebuttal Comment 1.1:
Comment: Thank you! :) | Summary: In this work, the authors aim to build a theoretical understanding of parameterized quantum circuits via non-asymptotic approximation error performance analysis. In particular, they demonstrate the advantages of PQCs over classical ones if specific smoothness criteria can be satisfied. The simulation results can further corroborate their theoretical understanding.
Strengths: 1. The paper provides a different theoretical perspective of understanding PQCs using a non-asymptotic viewpoint in addition to the universal approximation theory.
2. Their established non-asymptotic PQC approximation error bounds are technically correct
3. The simulation results can corroborate the theoretical results.
Weaknesses: 1. The authors' theoretical analysis of PQCs relies on the assumption of continuous or smooth target functions, which hinders the theoretical usage from a broader case of non-smooth and non-continuous target functions.
2. The numerical simulation is less convincing than practical machine learning datasets, so the actual machine learning applications are expected to validate the proposed theory.
3. The experimental validation could be designed to better corroborate the theoretical results, e.g., with the target function drawn from the Hölder classes of smooth and continuous functions discussed in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How can the authors generalize the proposed theory to cases where the target function is non-smooth and non-continuous?
2. What is the most significant advantage of the non-asymptotical analysis on PQCs compared to the universal approximation theory?
3. Is there a different or new discovery of the PQC's setup if using the authors' new theoretical perspective?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The proposed theorems rely on continuous or smooth target functions, which hinders their application to broader use cases.
2. The numerical simulations cannot corroborate the specific smooth and continuous functions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable comments. We would like to provide a detailed response to the questions raised by the reviewer.
> 1. The authors' theoretical analysis of PQCs relies on the assumption of continuous or smooth target functions, which hinders the theoretical usage from a broader case of non-smooth and non-continuous target functions. How can the authors generalize the proposed theory to those cases if the target function is non-smooth and non-continuous?
Reply 1: First, we would like to point out that continuous functions are typical classes of target functions in the study of learning theory. For example, the most celebrated universal approximation theorems for neural networks involve approximating continuous functions. This is because machine learning data points in practical scenarios are discrete and can often be fitted into continuous functions. Second, the Hölder smooth functions considered in our work constitute a large class of continuous functions, but they are not necessarily **infinitely smooth**. By the definition in Eq. (7) and the following explanation, the smoothness index of the Hölder class can vary in the range $(0, \infty)$. From a theoretical perspective, it is certainly of great interest to study the approximation of non-continuous functions, as the reviewer suggested. One possible way to generalize would be to first use continuous functions to approximate non-continuous functions, followed by the analysis in our work. Additionally, in the appendix, we introduce the construction of PQCs that implement multivariate trigonometric polynomials, which can be used to approximate any square-integrable functions (not necessarily continuous). However, to the best of our knowledge, deriving a quantitative error bound for approximating general non-continuous functions is highly non-trivial and thus requires further investigation.
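To illustrate the first stage of the generalization suggested above — approximating a non-continuous target by a continuous surrogate before applying the paper's continuous-function analysis — here is a small hypothetical numerical sketch (pure Python; the step target, the logistic surrogate, and the sharpness values $k$ are our own illustrative choices, not part of the paper's construction). The logistic function converges to the step function in the $L^2([0,1])$ norm as its sharpness grows.

```python
import math

def step(x):
    """Discontinuous target: a unit step at x = 0.5."""
    return 1.0 if x >= 0.5 else 0.0

def smooth(x, k):
    """Continuous surrogate: a logistic function with sharpness k."""
    return 1.0 / (1.0 + math.exp(-k * (x - 0.5)))

def l2_error(k, num=2000):
    """Midpoint Riemann-sum estimate of the L2([0,1]) distance between
    the step target and its logistic surrogate."""
    h = 1.0 / num
    return math.sqrt(sum(
        (step((i + 0.5) * h) - smooth((i + 0.5) * h, k)) ** 2 * h
        for i in range(num)
    ))

for k in (10, 100, 1000):
    print(f"k = {k:5d}, L2 error ~ {l2_error(k):.4f}")
```

The error shrinks roughly like $1/\sqrt{k}$, so any continuous-function approximation result can in principle be chained with such a surrogate, at the cost of an extra error term near the discontinuity.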
> 2. The numerical simulation is less convincing than practical machine learning datasets, so the actual machine learning applications are expected to validate the proposed theory.
Reply 2: We would like to emphasize that the main contribution of our work lies in the theoretical aspect of PQC approximation performance. The numerical simulation in our work is used to validate our theoretical results, which concern using PQCs to approximate continuous and smooth functions. For practical machine learning datasets, there is no well-defined target function, only discrete data points, so it is not intuitive to use such "datasets" to numerically validate our theory of "function" approximation. Additionally, the dimensions of real datasets are typically very large, making it impossible to train such a huge PQC model on our computers.
> 3. The experimental validation could be designed to corroborate the theoretical results, e.g., the target function relying on Hölder's discussions on smooth and continuous functions.
Reply 3: As we explained above, we designed the numerical experiment to validate our theorem. Thus, we chose a target function from the class of Hölder smooth functions that we study in the theory.
> 4. What is the most significant advantage of the non-asymptotical analysis on PQCs compared to the universal approximation theory?
Reply 4: The universal approximation theorem only demonstrates the existence of PQCs that can approximate the target function within some arbitrary error. However, it does not provide the construction of the circuit and its size as a function of the error and, therefore, cannot quantify the approximation performance. The non-asymptotic analysis of PQCs for QML in our work gives the explicit construction of PQCs and quantitative approximation error bound of PQCs, which characterizes the approximation performance of PQCs in terms of circuit size and the number of parameters. More importantly, such non-asymptotic analysis of PQCs makes it possible to compare their performance with that of classical neural networks, potentially revealing quantum advantages.
> 5. Is there a different or new discovery of the PQC's setup if using the authors' new theoretical perspective?
Reply 5: Yes, one of the novel contributions of our work is the setup of PQCs using quantum signal processing circuits and linear combination of unitaries circuits, which are first utilized in PQC constructions. Also, in Section 3.3 of PQC approximation for Hölder smooth functions, we propose a new construction called "nested PQC" that feeds the results of the first PQC to the second PQC. This new structure provides better approximation performance than traditional constructions and is therefore worth further study.
Overall, we would like to clarify that our main contribution is the theoretical analysis of PQC approximation performance for a wide class of functions that are typically studied in learning theory and approximation theory. We intend to use a simple toy example in the numerical experiment to validate and better illustrate our theoretical results, which is not our main focus in this work. We thank the reviewer again for pointing out the other possible classes of functions, like non-continuous functions, that are worth further investigation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses to my concerns. I raise the score to 7 (accept). | Summary: This paper studies the expressiveness of parameterized circuits to perform multivariate function approximation. This serves as a quantum counterpart to the theoretical results of classical machine learning, namely the universal approximation theorem. Theoretical results provide bounds on the approximation error of parameterized circuits with bounds on the quantum resources required. Finally, numerical experiments are provided, showing that parameterized quantum circuits are able to approximate multivariate functions with improved accuracy as the number of parameters increases.
Strengths: - This paper uses ideas from quantum signal processing to study the expressivity of parameterized quantum circuits, thereby serving as a first step for bridging the gap between quantum algorithms for matrix functions and quantum machine learning. This is a very natural idea, and is better motivated than many other ansatz used in variational quantum algorithms. Generally speaking, I think it is valuable for works in (quantum) machine learning to be guided by results with theoretical guarantees as this typically leads to better algorithm design.
- The overall paper is well-written and clear to understand.
- The numerical experiments appear promising and supplement the theoretical results well, although the parameter counts are fairly large even for the small test-cases presented.
Weaknesses: - The use of ideas from quantum signal processing is a bit of a double-edged sword. While Theorem 3 provides a non-asymptotic error bound, the quantum resources (circuit width and depth) are only asymptotic bounds. Presumably, the resources required for performing the LCU circuit are not very practical for NISQ devices, and it is likely that there are significant improvements to be made.
- It would be nice to include code for the numerical experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In the second paragraph of Sec 3.1, why is the number of parameters $s+d$, rather than simply $s$? If $d>s$, then only $s$ variables are relevant.
- In Sec 3.2, why were Bernstein polynomials chosen over other polynomial approximations, such as truncated Taylor series expansions? Are there other ways, and if so, what are the drawbacks?
- In Theorem 2, how does $n$ depend on $\epsilon$? I think it would make more sense to express the resource costs in terms of $1/\epsilon$.
- In Theorem 3, why does $n$ no longer depend on $\epsilon$? Does it mean that both $\epsilon$ and $n$ could be simultaneously chosen to be arbitrarily small?
- I am curious about how the PQCs were implemented for the numerical experiments. I assume analytical formulas for the approximating polynomial were used, rather than full circuit implementations with the overhead of LCU, right?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There are no major negative societal impacts of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of the paper and inspiring feedback. Here we respond to the insightful comments and questions.
> 1. The use of ideas from quantum signal processing is a bit of a double-edged sword. While Theorem 3 provides a non-asymptotic error bound, the quantum resources (circuit width and depth) are only asymptotic bounds. Presumably, the resources required for performing the LCU circuit are not very practical for NISQ devices, and it is likely that there are significant improvements to be made.
Reply 1: For the construction of our PQCs, the error bound and quantum circuit size are typically non-asymptotic, except in the step of decomposing multi-qubit gates into one- and two-qubit gates. Specifically, in the LCU procedure, constructing a larger-scale unitary block that encodes the summation of multiple unitary matrices requires applying multi-qubit control to each unitary matrix. To achieve this on a quantum computer, we utilize techniques from Ref. [1] to decompose the multi-qubit control gate into a linear-depth quantum circuit composed of CNOT gates and single-qubit rotation gates without using any ancilla qubit. The quantum resources are asymptotic in this multi-qubit control gate decomposition. These asymptotic quantum resources merely conceal constant or trivial terms, which do not impact the feasibility of our construction on quantum devices. Furthermore, decomposing multi-qubit gates to one- and two-qubit gates facilitates fair comparisons with classical neural networks in terms of network width and depth.
[1] A. J. da Silva and D. K. Park, Linear-depth quantum circuits for multiqubit controlled gates, Physical Review A 106.4 (2022).
> 2. It would be nice to include code for the numerical experiments.
Reply 2: Thanks for asking. In fact, all the code has already been uploaded to the Supplementary Material accompanying this work, along with instructions on how to run it.
> 3. In the second paragraph of Sec 3.1, why is the number of parameters $s+d$, rather than simply $s$? If $d>s$, then only $s$ variables are relevant.
Reply 3: Thanks for pointing it out. Here, in Sec 3.1, we do not have any assumption about $d>s$ as we aim to have a general analysis of PQC construction. Quantum signal processing (QSP) indicates that expressing an $s$-degree polynomial requires $s+1$ parameters. Therefore, when $s>d$, expressing an $s$-degree $d$-variable polynomial, with each variable having a non-zero degree, necessitates $s+d$ parameters. Conversely, when $d>s$, at most $2s$ parameters are needed to express an $s$-degree $d$-variable polynomial. Overall, at most $s+d$ parameters are required to express an $s$-degree $d$-variable polynomial.
> 4. In Sec 3.2, why were Bernstein polynomials chosen over other polynomial approximations, such as truncated Taylor series expansions? Are there other ways, and if so, what are the drawbacks?
Reply 4: In Sec 3.3, we do utilize truncated Taylor expansions for polynomial approximations, as presented in Theorem 4. We selected the Bernstein polynomial due to its ease of implementation and comprehensive theoretical results. Bernstein polynomials are considered "global" polynomials because they approximate functions over an entire interval without focusing on specific local behavior, which facilitates implementation through QSP. However, a drawback of global polynomials is their relatively poor approximation error, as illustrated in Figure 1. In contrast, Taylor series expansions are "local" polynomials that approximate functions based on local information at specific points. The approximation process involves identifying local points for expansion and using these points to construct polynomials. This locality allows for more precise approximations, as illustrated in Figure 1, making them a central approach in classical machine learning approximation theory. Based on these principles, we designed the PQC with a nested architecture. PQCs implementing local Taylor expansions yield better approximation errors, demonstrating an advantage even over near-optimal classical neural networks, as shown in our discussion part.
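To make the global-vs-local contrast above concrete, here is an illustrative pure-Python sketch (the target $\sin(2\pi x)$, the expansion point, and the degrees are our own hypothetical choices, not the paper's experiment). The degree-$n$ Bernstein polynomial $B_n[f](x)=\sum_{k=0}^{n} f(k/n)\binom{n}{k}x^k(1-x)^{n-k}$ converges slowly in the sup-norm, while a truncated Taylor expansion around $x=0.5$ is far more accurate at the same degree.

```python
import math

def bernstein_approx(f, n, x):
    """Degree-n Bernstein ("global") polynomial B_n[f](x) on [0, 1]."""
    return sum(
        f(k / n) * math.comb(n, k) * x**k * (1 - x) ** (n - k)
        for k in range(n + 1)
    )

def taylor_approx_sin(n, x, a=0.5):
    """Degree-n truncated ("local") Taylor expansion of sin(2*pi*x) about x = a."""
    total = 0.0
    for k in range(n + 1):
        # k-th derivative of sin(2*pi*x) is (2*pi)^k * sin(2*pi*x + k*pi/2)
        deriv = (2 * math.pi) ** k * math.sin(2 * math.pi * a + k * math.pi / 2)
        total += deriv * (x - a) ** k / math.factorial(k)
    return total

f = lambda x: math.sin(2 * math.pi * x)
grid = [i / 200 for i in range(201)]  # uniform grid on [0, 1]
for n in (4, 8, 16):
    err_b = max(abs(f(x) - bernstein_approx(f, n, x)) for x in grid)
    err_t = max(abs(f(x) - taylor_approx_sin(n, x)) for x in grid)
    print(f"n = {n:2d}, Bernstein sup-error ~ {err_b:.4f}, Taylor sup-error ~ {err_t:.2e}")
```

The Bernstein error decays only polynomially in $n$ for this smooth target, whereas the Taylor error drops off much faster — mirroring the global-vs-local distinction that motivates the nested PQC construction.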
> 5. In Theorem 2, how does $n$ depend on $\epsilon$? I think it would make more sense to express the resource costs in terms of $1/\epsilon$.
Reply 5: In Theorem 2, we present a universal approximation theorem (UAT) for PQCs, highlighting the existence of a PQC capable of approximating any continuous function, but we do not explore the relationship between $n$ and $\epsilon$. Similarly, in classical learning theory (as in [13]), the UAT simply guarantees the existence of a neural network for approximating any continuous function within arbitrary error but does not imply the relationship between the network size and approximation error. All non-asymptotic results are detailed in Theorems 3 and 4, which go beyond UAT by providing quantitative error bound in terms of circuit size.
> 6. In Theorem 3, why does $n$ no longer depend on $\epsilon$? Does it mean that both $\epsilon$ and $n$ could be simultaneously chosen to be arbitrarily small?
Reply 6: In Theorem 3, $\epsilon$ does not represent the approximation error. As shown in Equation (6), the approximation error $\epsilon+d2^d\frac{\ell^2}{n\epsilon^2}$ is determined by both $\epsilon$ and $n$. Choosing both $\epsilon$ and $n$ to be small may result in a large approximation error.
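As a quick illustrative check of this trade-off, one can evaluate the bound from Equation (6) directly (the values $d=2$, $\ell=1$, and $n=10^4$ below are hypothetical, chosen only for demonstration):

```python
def error_bound(eps, n, d=2, ell=1.0):
    """Approximation-error bound eps + d * 2^d * ell^2 / (n * eps^2) from Eq. (6)."""
    return eps + d * 2**d * ell**2 / (n * eps**2)

n = 10_000
for eps in (0.5, 0.1, 0.01, 0.001):
    print(f"eps = {eps:6.3f}, bound = {error_bound(eps, n):.4f}")
```

For fixed $n$, shrinking $\epsilon$ first lowers the bound and then blows it up through the $1/\epsilon^2$ term, confirming that $\epsilon$ and $n$ cannot both be made arbitrarily small without increasing the overall error.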
> 7. I am curious about how the PQCs were implemented for the numerical experiments. I assume analytical formulas for the approximating polynomial were used, rather than full circuit implementations with the overhead of LCU, right?
Reply 7: Yes, we did not implement the heavy LCU procedure in our experiments. There is no difference between adding monomials together and implementing the linear combination of monomial PQCs when conducting numerical simulations, so we opted for the simpler approach.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning | Accept (poster) | Summary: The proposed inter- & intra-modality modeling (I2M2) framework addresses the limitations of conventional approaches in supervised multi-modal learning. By considering both inter-modality dependencies and intra-modality dependencies, it achieves superior performance in predicting labels. The I2M2 framework offers adaptability without prior knowledge of dependency strength and demonstrates improved accuracy compared to methods focusing solely on one type of modality dependency.
Strengths: 1. The I2M2 framework captures and integrates both inter-modality and intra-modality dependencies, leading to better predictions and enhanced model performance.
2. Without requiring prior knowledge of dependency strength, the I2M2 framework effectively models inter- and intra-modality dependencies, making it versatile and adaptable in different scenarios.
3. Experimental evaluations on real-world datasets demonstrate the I2M2 framework's superiority over traditional methods that focus on only one type of modality dependency. By considering both inter- and intra-modality dependencies, the I2M2 framework achieves higher accuracy in multi-modal learning tasks
Weaknesses: See questions
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I am confused about the implementation details of the proposed method, particularly the simulation of q_{x, x'}(y|x, x') or q_{x}(y|x) in Equation 4/5. It is important for the authors to provide a clear description of how these probabilities are computed and how they contribute to the overall model's functionality.
2. A lot of multi-modality models have been proposed to explore inter-modal and intra-modal information. A classic framework involves learning to concatenate two embeddings: one for inter-modal information, trained jointly across modalities, and one for intra-modal information, trained separately. What is the superiority of the proposed model compared to this approach?
3. The experiment results indicate a consistent pattern across all three datasets, where inter-modality methods perform comparably to the proposed model, while intra-modal methods exhibit significantly worse performance. This raises the question of whether the intra-modal approach is meaningful in processing multi-modal data. The authors should provide a thorough analysis and discussion to address this finding.
4. As claimed in the Introduction, the proposed method aims to uncover the factors behind performance differences between multi-modal and uni-modal learners. Additional empirical experiments and analysis are necessary to understand how the proposed method explains the observed disparities.
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your review and your thoughtful comments. We are glad that you find our method versatile and adaptable to different scenarios. We address your concerns below:
> I am confused about the implementation details of the proposed method, particularly the simulation of q_{x, x'}(y|x, x') or q_{x}(y|x) in Equation 4/5.
As explained in section 3.1, our model generates a probabilistic score through four components derived from the joint distribution in equation 1: $p(\mathbf{y})$, which reflects the softmax bias; the intermediate terms, which correspond to predictive models for each individual modality; and the final term, which represents the multimodal predictive model, incorporating combinations of modalities and the label. Each of these values is a positive scalar. In Equation 2, we multiply these components together and normalize by the denominator, a constant that does not depend on the label $\mathbf{y}$.
The notation $q$ for instance $q_{\mathbf{x}}(\mathbf{y} \mid \mathbf{x})$ is used to denote the function that maps $ (\mathbf{y}, \mathbf{x}) $ to a positive scalar. This function is proportional to $p(\mathbf{x} \mid \mathbf{y})$ because $ p(\mathbf{y} \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \mathbf{y}) p(\mathbf{y})}{p(\mathbf{x})}$. Thus, by defining $ q_{\mathbf{x}}(\mathbf{y} \mid \mathbf{x})$ as proportional to $ p(\mathbf{x} \mid \mathbf{y}) $, we have simply renamed $ p(\mathbf{y} \mid \mathbf{x}) $ for convenience. Specifically, $ q_{\mathbf{x}}(\mathbf{y} \mid \mathbf{x}) = c \cdot p(\mathbf{x} \mid \mathbf{y})$, where $c$ is a constant that normalizes the function representing the interaction between $\mathbf{y}$ and $ \mathbf{x}$. We will further articulate this clearly in the revised manuscript.
---
> A lots of multi-modality models have been proposed to explore inter-modal and intra-modal information. A classic framework involves learning to concatenate two embeddings: one for inter-modal information, trained jointly across modalities, and one for intra-modal information, trained separately. What is the superiority of the proposed model compared to this approach?
We would appreciate it if the reviewer could provide specific references to the papers that describe the framework mentioned in your response. To the best of our knowledge, all prior work (most of which is cited in our paper) focuses on either intra- or inter-modality dependencies, but not both. The method described by the reviewer aligns with our approach, where the model output is $\exp(W_x h(x) + W_x' h'(x) + W_{x, x'} h''(x, x'))$. This is essentially equivalent to the product (or a log-ensemble) of these individual components. We fine-tune this full model end-to-end, rather than simply concatenating the outputs of pre-trained models.
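To spell out the log-ensemble equivalence above, here is a minimal sketch with hypothetical per-class logits (the three heads standing in for $W_x h(x)$, $W_{x'} h'(x')$, and $W_{x,x'} h''(x,x')$ are placeholders, not our trained models): taking a softmax of the summed logits yields exactly the normalized product of the exponentiated components.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical per-class logits from two unimodal heads and one multimodal head.
logits_x  = [1.2, -0.3, 0.5]   # stand-in for W_x  h(x)
logits_xp = [0.1,  0.9, -1.0]  # stand-in for W_x' h'(x')
logits_xx = [0.4,  0.2,  0.6]  # stand-in for W_{x,x'} h''(x, x')

# Summing the logits before the softmax ...
summed = softmax([a + b + c for a, b, c in zip(logits_x, logits_xp, logits_xx)])

# ... equals normalizing the product of the exponentiated components.
prods = [math.exp(a) * math.exp(b) * math.exp(c)
         for a, b, c in zip(logits_x, logits_xp, logits_xx)]
z = sum(prods)
product = [p / z for p in prods]

assert all(abs(s - p) < 1e-9 for s, p in zip(summed, product))
print(summed)
```

This is why end-to-end training of the combined model and a product-of-experts view of the unimodal and multimodal components describe the same predictive distribution.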
---
> The experiment results indicate a consistent pattern across all three datasets, where inter-modality methods perform comparably to the proposed model, while intra-modal methods exhibit significantly worse performance. This raises the question of whether the intra-modal approach is meaningful in processing multi-modal data.
The fact that inter-modality modeling is not the best approach for all multimodal tasks indicates that intra-modality dependencies are beneficial, and conventional methods are not the most effective. The impact of intra-modality dependencies is demonstrated through our approach, which leverages both inter- and intra-modality dependencies, resulting in across-the-board improvements in aggregate metrics across multiple tasks with diverse characteristics. Specifically, both dependencies are pertinent for the AV-MNIST and MIMIC-III datasets, intra-modality dependencies prove more beneficial for FastMRI, and only inter-modality dependencies are relevant for the NLVR2 dataset.
---
> As claimed in the Introduction, the proposed method aims to uncover the factors behind performance differences between multi-modal and uni-modal learners. Additional empirical experiments and analysis are necessary to understand how the proposed method explains the observed disparities.
We clarify that the goal of this work was to develop a principled approach to solving multi-modal problems that performs consistently well across multiple tasks. This is motivated by the inconsistent performance of conventional methods across different datasets.
Towards that end, we used our data generative process (Figure 1) to explain the performance differences of conventional methods across various datasets. The key takeaway from our work is that if the cross-modality information to predict the label is strong, the relationships between different types of modalities and the label (inter-modality dependency) become more important. Conversely, if the cross-modality information is weak, the dependencies between the individual modalities and the label (intra-modality dependencies) are crucial. Without prior knowledge of the strengths of these dependencies, existing methods often make incomplete assumptions about the datasets, resulting in sub-optimal performance. Our proposed method effectively addresses this by capturing both inter- and intra-modality dependencies. We validated the efficacy of our framework using a battery of experiments involving modalities of completely different nature, including images, text, audio, and even signals in frequency space acquired in healthcare. We would really appreciate it if you could clarify what additional specific experiments and analysis are necessary to improve our work.
---
Thank you again for your time and efforts in reviewing our paper, and we hope that you will consider raising your score if you find our response satisfactory.
Thank you,
Authors
---
Rebuttal 2:
Title: Official Comment of Submission7139 by Reviewer qKeb
Comment: Thanks for the author's response. I still have a few questions here:
1. For the first question, the author may have misunderstood my question. My question is actually how do you estimate these conditional probabilities in your experiments or other real-world scenarios? I have re-checked the paper and it does not explain these specific experimental details clearly.
2. In fact, designing an intra-modal module and an inter-modal module to jointly solve multi-modal problems has been proposed in many fields, and I have simply listed some literature for the author's reference:
```
[1] Verma, Sunny, et al. "Deep-HOSeq: Deep higher order sequence fusion for multimodal sentiment analysis." 2020 IEEE international conference on data mining (ICDM). IEEE, 2020.
[2] Nagrani, Arsha, et al. "Attention bottlenecks for multimodal fusion." Advances in neural information processing systems 34 (2021): 14200-14213.
[3] Huang, Po-Yao, et al. "Improving what cross-modal retrieval models learn through object-oriented inter-and intra-modal attention networks." Proceedings of the 2019 on International Conference on Multimedia Retrieval. 2019.
[4] Chen, Ruihan, et al. "DPHANet: Discriminative Parallel and Hierarchical Attention Network for Natural Language Video Localization." IEEE Transactions on Multimedia (2024).
[5] Liang, Meiyu, et al. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 12. 2024.
[6] Wu, Yue, et al. "Self-supervised intra-modal and cross-modal contrastive learning for point cloud understanding." IEEE Transactions on Multimedia 26 (2023): 1626-1638.
```
3. I have noticed the experimental results in the FastMRI dataset, but this experiment is in comparison to the intra-modality methods proposed by other people. As for the proposed intra-modal module in this paper, I have to mention that the results can only show the superiority of the inter-modal module in the ablation study of Tables 2 & 3.
4. Thanks for your reply, can I understand that your method can effectively measure the relationship between different modes and the dependence between modes? However, this property has not been reflected in subsequent experiments, and we hope to see some more valuable experimental results, rather than just the improvement of algorithm performance.
---
Rebuttal 3:
Title: Thank you for your response [1]
Comment: Dear reviewer,
Thanks for your time and reply. We address your remaining concerns below:
> For the first question, the author may have misunderstood my question. My question is actually how do you estimate these conditional probabilities in your experiments or other real-world scenarios? I have re-checked the paper and it does not explain these specific experimental details clearly.
The conditional probabilities are estimated by training these predictive models using the maximum likelihood principle. These probabilities are obtained from the softmax of the logits produced by these predictive models (refer to line 134).
---
> In fact, designing an intra-modal module and an inter-modal module to jointly solve multi-modal problems has been proposed in many fields, and I have simply listed some literature for the author's reference:
We emphasize that our method is fundamentally different from all the referenced methods. Based on the definitions in our paper, all these methods capture either inter- or intra-modality dependencies, but not both. We distinguish our methodology from all the papers you referenced below:
1. Deep-HOSeq: Deep higher order sequence fusion for multimodal sentiment analysis by Verma, Sunny et al., focuses on inter-modality modeling by our definition. Both Equation 2 and Equation 4 in their work focus on inter-modality, employing different parameterizations, as both equations apply non-linearity to the concatenated unimodal representation $T_{Val}$.
2. In "Attention Bottlenecks for Multimodal Fusion" by Nagrani et al., referring to Figure 1, late fusion corresponds to intra-modality modeling. The other approaches depicted are simply different parameterizing schemes for inter-modality modeling within our framework. In contrast to their method, we propose that it is important to capture both inter and intra-modality dependencies.
3. Equations 9, 10, and 11 are the most important in the paper "Improving what cross-modal retrieval models learn through object-oriented inter-and intra-modal attention networks" by Huang, Po-Yao, et al., but unfortunately, they are not well-formed and factually incorrect, making them difficult to interpret. It's unclear which variables $v$ and $t$ are being summed over in these equations. If summation is indeed intended over $v$ and $t$, then Equations 9 and 10 would not logically depend on $v$ and $t$, which introduces further confusion and undermines their correctness.
4. The paper "DPHANet: Discriminative Parallel and Hierarchical Attention Network for Natural Language Video Localization" by Chen, Ruihan, et al. clearly fits our definition of an inter-modality model. Figure 2 and the corresponding sections in the paper show that the concatenation of cross-modal interaction and intra-modal self-correlation is used as input to convolution layers with non-linearities and transformer blocks, thereby only focusing on inter-modality dependencies.
5. In the paper "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search" by Liang, Meiyu, et al., there are separate labels for each modality as well as for the multi-modal case, which differs from our problem setup and methodology. They construct three distinct label sets based on textual similarity, image similarity, and a third set that combines these similarities (all derived from a knowledge graph). Their approach focuses on creating representations to solve three different problems independently, rather than using different types of dependencies that contribute to solve a single problem.
6. The paper "Self-Supervised Intra-Modal and Cross-Modal Contrastive Learning for Point Cloud Understanding" by Wu, Yue, et al. aligns with our definition of intra-modality modeling. The model does not capture any inter-modality dependencies, as defined in our framework. The contrastive loss function in Section III.D is a linear combination between the modality representations, as it simply evaluates the similarity between the two modality representations without capturing the non-linear interactions or dependencies between the modalities.
> I have noticed the experimental results in the FastMRI dataset, But this experiment is in comparison to the intra-modality methods proposed by other people. As for the proposed intra-modal module in this paper, I have to mention that the results can only show the superiority of the inter-modal module in the ablation study of Tables 2 & 3.
Could you clarify what you mean by "intra-modality methods proposed by other people" in the context of the fastMRI dataset, and your statement that “results can only show the superiority of the inter-modal module in the ablation study of Tables 2 & 3”?
In our experiments, we consistently use the intra-modality module from Section 3.3 across all datasets, including fastMRI, and our results demonstrate that incorporating this module with I2M2 leads to better performance.
---
Rebuttal 4:
Title: Thank you for your response [2]
Comment: > Thanks for your reply, can I understand that your method can effectively measure the relationship between different modes and the dependence between modes? However, this property has not been reflected in subsequent experiments, and we hope to see some more valuable experimental results, rather than just the improvement of algorithm performance.
We've included additional experiments in the appendix, such as how the entropy changes and detailed performance comparisons of all methods under OOD distribution shifts. These may not have been fully evident in the main content and we will incorporate them into the main paper in the final revision. Do you have any further suggestions for other valuable experiments we should conduct?
---
Thank you,
Authors
---
Rebuttal Comment 4.1:
Title: Official Comment of Submission7139 by Reviewer qKeb
Comment: Thanks for your detailed response, most of my concerns have been solved, I will upgrade my scores. But I have to mention that some descriptions in the paper are not clear enough (other reviewers also have low confidence), and hope to see a more clear expression in your final version. | Summary: The authors present a novel framework, I2M2 (Inter- & Intra-Modality Modeling), designed to enhance supervised multi-modal learning by effectively leveraging multiple modalities. This framework captures both the relationships between different modalities (inter-modality dependencies) and within a single modality (intra-modality dependencies) concerning a target label. The multi-modal learning problem is approached through the lens of generative models, with the target label influencing multiple modalities and their interactions.
Extensive experimental results have validated the effectiveness of I2M2 using real-world datasets from the healthcare and vision-and-language domains, demonstrating its superiority over traditional methods that focus on a single type of modality dependency.
Strengths: 1. The paper introduces a novel framework, I2M2, which marks a significant departure from traditional approaches in multi-modal learning that typically focus on either inter- or intra-modality dependencies in isolation.
2. By adopting a generative model perspective to understand multi-modal data and proposing a new data-generating process, the authors offer a fresh outlook on a well-studied problem.
Weaknesses: 1. Further experimental analyses could be strengthened, especially for the selection variable and the strength of the selection mechanism across datasets.
2. While the paper compares I2M2 with traditional methods, a more comprehensive comparison with the latest state-of-the-art methods in multi-modal learning would better situate the framework within the current research landscape.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I'm curious how the strength of the selection mechanism varies and impacts across datasets, and how this strength is determined?
2. Why does I2M2 seem to have more significant superiority in OOD scenarios, and is there an effect of data distribution differences on I2M2?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have mentioned some limitations, such as the increased computational cost with more modalities and challenges in model initialization. However, it would be beneficial if they could provide a more detailed discussion of these limitations, including any potential workarounds or future research directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your review and your thoughtful comments. We appreciate your recognition of our paper as providing a fresh perspective on multi-modal learning. We address your concerns below:
> Further experimental analyses could be strengthened, especially for the selection variable and the strength of the selection mechanism across datasets.
> I'm curious how the strength of the selection mechanism varies and impacts across datasets, and how this strength is determined?
> Why does I2M2 seem to have more significant superiority in OOD scenarios, and is there an effect of data distribution differences on I2M2?
We assume that the selection variable is always present but its strength varies among different datasets. It is important to note that the strength cannot be determined explicitly. However, if the selection effect in a dataset is strong, the relationships between different types of modalities and the label (inter-modality dependencies) are more important. If the strength is weaker, the dependencies between the individual modalities and the label (intra-modality dependencies) are crucial. Each dataset in our paper exhibits different characteristics: intra-modality dependencies are more beneficial for the fastMRI dataset, while inter-modality dependencies are more relevant for the NLVR2 dataset. Both dependencies are pertinent for the AV-MNIST, MIMIC-III, and VQA datasets.
A key point to note is that we typically do not have prior knowledge of the dependency strength between modalities for classification. Our proposed I2M2 framework considers both intra- and inter-modal dependencies and performs better than using either of them alone. This gap widens in the out-of-distribution (OOD) experiments (see Appendix C), as the strength of the selection effect can often shift for OOD distributions. In scenarios where models depend heavily on one type of modality dependency, these models often become less reliable when faced with changes in data distribution. In contrast, we observed consistent improvements with I2M2 across all settings.
---
> While the paper compares I2M2 with traditional methods, a more comprehensive comparison with the latest state-of-the-art methods in multi-modal learning would better situate the framework within the current research landscape.
We would like to emphasize that we indeed considered the SOTA models for ALL the datasets analyzed (see the experimental setups of all the datasets). Specifically, for AV-MNIST and MIMIC-III, we selected the top-performing models from MultiBench [1], which recommends these datasets as benchmarks for evaluating multimodal learning capabilities. For FastMRI, our reference is the work by Madaan et al. [2], the only study we identified that focuses on diagnosis for this task. Furthermore, for VQA and NLVR2, we utilized the FIBER model as used in the recent studies [3,4], which achieves state-of-the-art performance on both datasets.
---
> The authors have mentioned some limitations, such as the increased computational cost with more modalities and challenges in model initialization. However, it would be beneficial if they could provide a more detailed discussion of these limitations, including any potential workarounds or future research directions.
While we have discussed potential workarounds and future research directions in the limitations and future work section of our work, we reiterate them here and will elaborate on these points further in the final revision:
For a large number of modalities, we propose processing all modalities using a single encoder, with a null token to indicate the presence or absence of each modality. For each example, we randomly select a subset of $k$ combinations of conditional probabilities. The model is then constructed based on either the product or the sum of logarithms of these $k$ conditional probabilities. This approach aims to keep the number of parameters linear, thereby managing complexity effectively.
For model initialization, there are optimization challenges in training multi-modal models from scratch that are not yet fully understood. We believe that investigating these challenges and developing end-to-end training methods is a promising area for future research.
---
Thank you again for your time and efforts in reviewing our paper, and we hope that you will consider raising your score if you find our response satisfactory.
Thank you,
Authors
---
References.
[1] Liang et al., 21 MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
[2] Madaan et al., 23 On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis.
[3] Dou et al., 22 Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone.
[4] Makino et al., 23 https://openreview.net/forum?id=QoRo9QmOAr. | Summary: Previous supervised multi-modal learning involves mapping multiple modalities to a target label, with previous studies focusing separately on either inter-modality or intra-modality dependencies. This approach may not be optimal, so the proposed inter- & intra-modality modeling (I2M2) framework captures and integrates both dependencies for more accurate predictions. Evaluation using real-world healthcare and vision-and-language datasets shows that the proposed method outperforms traditional methods that focus on only one type of modality dependency.
Strengths: 1. The paper is relatively well-written and easy to understand.
2. The proposed method performs well on real-world healthcare and vision-and-language datasets, indicating its potential to be applied in a broader field.
Weaknesses: 1. It would be better if the authors could also report the computational complexity of the proposed method.
2. For visual language modeling, one of the recent popular ways is to use CLIP or CLIP-like models for visual language contrastive pretraining to boost accuracy and performance; it would be interesting for the authors to compare their method against the CLIP model on their datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above weakness.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No potential negative societal impact of their work observed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your review and thoughtful comments. We are glad that you found our paper well-written, easy to understand, and potentially applicable to the broader field. We address your concerns regarding computational complexity and comparison with CLIP below.
> It would be better if the authors could also report the computational complexity of the proposed method.
Computational complexity can refer to either the number of parameters or the training time. Along either dimension, I2M2 only moderately adds to the complexity of the underlying models.
Number of parameters: The total number of parameters in I2M2 is the sum of the parameters of the inter-modal and intra-modal models used, as discussed in the Limitations section and Section 4.6. Furthermore, we show (in Figure 3 and Figure 4 of the paper) that even when the inter- and intra-modality models are allocated the same number of parameters as I2M2, their performance is significantly worse in comparison to I2M2.
Training time: Even for large-scale datasets and models such as VQAv2 and NLVR2, the additional training time on top of pre-trained models was 16 hours for VQA and 4 hours for NLVR2. For other datasets, the increase in training time was minimal, and I2M2 converged within a few epochs. We emphasize that we only added an additional MLP for large models to keep the computational cost in terms of parameters minimal.
---
> For visual language modeling, one of the recent popular ways is to use CLIP or CLIP-like models for visual language contrastive pertaining to boost accuracy and performance, it would be interesting for the authors to compare their method against the CLIP model on their datasets.
We agree that building multimodal visual language models has become a recent trend. Our proposed approach is compatible with these models, as they work with images, text, or both combined. Therefore, our methodology can be applied to these as well, offering a promising way to initialize the inter and intra-modality models. We leave this investigation for future work.
---
Thank you again for your time and efforts in reviewing our paper, and we hope that you will consider raising your score if you find our response satisfactory.
Thank you,
Authors | Summary: This paper proposes a framework for multi-modal learning called inter- & intra-modality modeling (I2M2). I2M2 can simultaneously capture inter-modality dependencies (relationships between different modalities) and intra-modality dependencies (relationships within a single modality). This approach aims to improve the accuracy of predictions by integrating both types of dependencies, rather than focusing on one. Experiments on health care data verify the advantage of utilizing both inter- and intra-modality.
Strengths: The motivation is clear that both intra- and inter-modality should be considered simultaneously. Experimental results verify this claim.
Weaknesses: As multi-modality learning has been fully studied, it is questionable that there are few comparisons with the existing works. In the introduction, the author states that existing inter-modality modeling methods can technically capture both inter- and intra-modality
dependencies (but with some ineffectiveness); however, this statement is not reflected by the experiments. Therefore, it is questionable how their 'ineffective' performance compares with the proposed 'effective' method.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How is the performance compared to existing inter-modality modeling methods, such as that in ref.[15]?
2. How do we deal with multi-modality when the number of modalities is greater than 2? Should we consider the conditional probabilities on any combination of modalities or just modalities between any pairs of modalities? Further, should all the conditional probabilities be equally treated?
3. In the FastMRI experiment, the performance under low SNR k-space data is analyzed. How about the robustness when only one modality is affected? For example, only magnitude modality is disturbed by noise during signal transmission.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your review and your thoughtful comments. We appreciate you finding our paper well-motivated and claims verified through the experiments. We address your concerns below.
> In the introduction, the author states that existing inter-modality modeling methods can technically capture both inter- and intra-modality dependencies (but with some ineffectiveness); however, this statement is not reflected by the experiments. Therefore, it is questionable how their 'ineffective' performance compares with the proposed 'effective' method.
To begin with, it's important to acknowledge that in the context of multimodal classification tasks, either the intermodal or intramodal approach may perform better. The challenge lies in the fact that, prior to analysis, the relative importance of these dependencies for classification purposes is often unknown. Our goal in this paper was to develop a principled framework that can better capture both intra- and inter-modal dependencies when solving any task involving multiple modalities. The fact that inter-modality modeling is not the best approach for all multimodal tasks tells us that the conventional way of doing it is not the most effective. The impact of our proposed framework is demonstrated through across-the-board improvements in aggregate metrics across multiple tasks with diverse characteristics. Specifically, both dependencies are pertinent for the AV-MNIST and MIMIC-III datasets, while intra-modality dependencies prove more beneficial for FastMRI, and only inter-modality dependencies are relevant for the NLVR2 dataset.
---
> How is the performance compared to existing inter-modality modeling methods, such as that in ref.[15]?
In Figure 3 of our main paper, we compare UME from ref. [15] (the mixture of uni-modal expert models) against the proposed I2M2 and show that I2M2 outperforms UME across all knee pathologies.
Additionally, we present a comparison with both UME and UMT [15] on AV-MNIST and MIMIC-III datasets below:
| Model | AV-MNIST | Mortality | ICD-9 (140 - 239) | ICD-9 (460 - 519) |
|-------------------|------------------|-----------------|-------------------|-----------------|
| UME | 68.97 ± 0.34 | 77.55 ± 0.26 | 91.42 ± 0.01 | 68.69 ± 0.38 |
| UMT | 71.72 ± 0.27 | 77.04 ± 0.59 | 91.33 ± 0.18 | 66.76 ± 0.82 |
| I2M2 (proposed) | **72.38 ± 0.17** | **78.10 ± 0.17**| **91.58 ± 0.10** | **68.88 ± 0.34**|
Furthermore, we point to a fundamental difference between I2M2 and the methods proposed in ref. [15]. The methods in ref. [15] either capture intra-modality (UME) or inter-modality (UMT) dependencies. They do not address the existence or necessity of combining both types of dependencies. They suggest using one type of dependency over the other based on their varying strengths. As demonstrated in our paper, the strength of these dependencies varies across different datasets, and we do not have prior knowledge of it. Hence, we argue that it is essential to capture both of them.
---
> How do we deal with multi-modality when the number of modalities is greater than 2? Should we consider the conditional probabilities on any combination of modalities or just modalities between any pairs of modalities? Further, should all the conditional probabilities be equally treated?
There is a straightforward way to extend our methodology to more than two modalities, which we discussed in the Limitations section. Specifically, we can process multiple modalities using a single encoder, with a null token to indicate the presence or absence of each modality. For each example, we randomly select a subset of $k$ combinations of conditional probabilities. The model is then constructed based on either the product or the sum of logarithms of these $k$ conditional probabilities. This approach will keep the number of parameters linear, thereby managing complexity effectively.
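To make the extension concrete, here is a toy numerical sketch of the subset-sampling step described above; the function name, the dictionary interface, and the particular log-probabilities are our own illustrative assumptions, not an implementation from the paper:

```python
import math
import random

def i2m2_score_multi(cond_log_probs, k=3, seed=0):
    """Combine a random subset of k modality-subset conditionals (sketch).

    `cond_log_probs` maps a modality subset (a tuple of names) to
    log p(y | subset), as might be produced by a shared encoder with null
    tokens marking absent modalities.  Summing the logs of the k sampled
    conditionals is a product of experts over that subset, keeping the
    number of evaluated terms (and parameters) manageable.
    """
    rng = random.Random(seed)
    subsets = rng.sample(sorted(cond_log_probs), k)
    return sum(cond_log_probs[s] for s in subsets)

# Toy example with three modalities and hypothetical log-probabilities.
cond = {
    ("audio",): math.log(0.7),
    ("image",): math.log(0.6),
    ("audio", "image"): math.log(0.8),
    ("audio", "text"): math.log(0.5),
}
score = i2m2_score_multi(cond, k=2)
```

The sampled-subset sum stays linear in the number of sampled conditionals regardless of how many modality combinations exist in total.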
---
> In the FastMRI experiment, the performance under low SNR k-space data is analyzed. How about the robustness when only one modality is affected? For example, only magnitude modality is disturbed by noise during signal transmission.
We want to clarify that in MRI the data is acquired in frequency space and the measurements are in the complex domain. An inverse Fourier transform of these complex measurements produces complex images with real and imaginary channels, from which one obtains the magnitude and phase channels. Whenever there is a degradation in SNR, it affects both the real and imaginary channels; hence both the magnitude and the phase will be affected (albeit in different ways). Under no circumstances will the magnitude or the phase alone be affected. As this is not a realistic scenario, we did not experiment with it.
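This point can be checked with a toy NumPy simulation (our illustration, not the paper's pipeline): adding complex-valued noise in k-space perturbs both the magnitude and the phase of the reconstructed image.

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((16, 16))
image[4:12, 4:12] = 1.0                  # toy "anatomy"
kspace = np.fft.fft2(image)              # frequency-space acquisition

# Complex-valued measurement noise, as in a degraded-SNR acquisition.
noise = (rng.normal(scale=5.0, size=kspace.shape)
         + 1j * rng.normal(scale=5.0, size=kspace.shape))

clean = np.fft.ifft2(kspace)
recon = np.fft.ifft2(kspace + noise)

# Both the magnitude channel and the phase channel are perturbed.
mag_change = np.abs(np.abs(recon) - np.abs(clean)).max()
phase_change = np.abs(np.angle(recon) - np.angle(clean)).max()
```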
We highlight that we provide experiments with various unimodal OOD image and text-based test sets for VQA in Appendix C. Across all these test sets, I2M2 consistently achieves a relative improvement ranging from two to four percent compared to the performance of inter-modality modeling.
---
Thank you again for your time and efforts in reviewing our paper, and we hope that you will consider raising your score if you find our response satisfactory.
Thank you,
Authors
---
Rebuttal Comment 1.1:
Comment: Thanks to the reviewers for their response. I have increased my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper studies supervised learning for multi-modal data. Previous works can be categorized into inter-modality learning and intra-modality learning. Inter-modality learning aims to learn multi-modal data jointly by techniques such as feature fusion. Intra-modality learning focuses on learning uni-modal data separately. This paper attempts to unify these two frameworks by building a probabilistic graphical model of the multi-modal data. The previous inter- and intra-modal learning frameworks can be regarded as special cases of this graphical model by assuming specific independence of data. The joint modeling framework is expected to automatically infer whether the inter- or intra-information is important for given tasks. Additionally, the resulting model is simple to implement, by combining the inter- and intra-learning models together.
Experiments are conducted on several tasks, such as audio-vision, healthcare, and vision-language. The results show that the proposed combination can improve performances of each single model, intra-modality combinations and inter-modality feature fusions.
Strengths: 1. The motivation of the joint modeling is convincing. The resulting model is simple to use by combining different models.
2. The paper discusses an interesting unification of the two paradigms in multi-modal learning.
3. Experiments are conducted on different domains such as audio, image and language.
Weaknesses: 1. For the proposed model, are the inter-modal classifier and intra-modal classifiers trained from scratch? Can we simply use pre-trained ones? If the model should be re-trained or fine-tuned, it might be computationally expensive.
2. As the paper says, the proposed model is similar to a mixture of experts. It should be helpful to discuss more about this point and previous approaches of mixing experts in the related work part. The discussion could help to highlight the difference and contribution.
3. In experiments, the details of intra-modality seem missing. Do the authors choose competitive baselines for this class of methods? Moreover, for inter-modal models, Late fusion seems to be a simple baseline. LRTF was proposed in 2018. My concern is whether the results hold for more recent and competitive models.
Technical Quality: 2
Clarity: 3
Questions for Authors: Can the author explain more about the derivation of Eq. (3)? How to replace $p(y \mid x)$ by $q(x \mid y)$? The former is probability of $y$ while the latter is probability of $x$.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, the authors discuss about limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your review and your thoughtful comments. We appreciate you finding our paper well-motivated and simple to use with thorough experiments across different domains. We clarify your concerns below.
> For the proposed model, are the inter-modal classifier and intra-modal classifiers trained from scratch? Can we simply use pre-trained ones? If the model should be re-trained or fine-tuned, it might be computationally expensive.
As discussed in the limitations section and compared in Table 5, our approach benefits from pre-training the inter-modal and intra-modal classifiers. While fine-tuning these models jointly further improves performance, the computational expense is minimal, since the models converge within a few epochs.
---
> The proposed model is similar to a mixture of experts. It should be helpful to discuss more about this point and previous approaches of mixing experts in the related work part. The discussion could help to highlight the difference and contribution.
The proposed model is a product-of-experts [Hinton et al. 2013] and NOT a mixture-of-experts. We provide an empirical comparison showing benefits over the mixture-of-experts model in Figure 3. It is important to note that a mixture of experts cannot capture all the interactions between modalities and the label. For example, consider the XOR problem with two variables: simply combining the outputs of two advanced neural networks (a method akin to a mixture of experts) cannot solve this task. We will further expand on previous approaches in the related work section and make this distinction clearer.
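The XOR argument can be verified numerically; the following toy construction is ours, built only to illustrate the rebuttal's point:

```python
import itertools

# XOR with two binary "modalities": y = x1 XOR x2.
data = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product([0, 1], repeat=2)]

def unimodal_posterior(idx, value):
    """Empirical p(y = 1 | x_idx = value), for a single modality."""
    ys = [y for *xs, y in data if xs[idx] == value]
    return sum(ys) / len(ys)

# Each modality alone is uninformative: every unimodal posterior is 0.5,
# so any convex combination (mixture) of the two unimodal experts is 0.5.
assert all(unimodal_posterior(i, v) == 0.5 for i in (0, 1) for v in (0, 1))

def inter_modal_posterior(x1, x2):
    """Empirical p(y = 1 | x1, x2): the expert that sees both modalities."""
    ys = [y for a, b, y in data if (a, b) == (x1, x2)]
    return sum(ys) / len(ys)

# Only the inter-modality expert recovers the XOR label exactly.
preds = [inter_modal_posterior(x1, x2) for x1, x2, _ in data]
```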
---
> In experiments, the details of intra-modality seem missing. Do the authors choose competitive baselines for this class of methods? Moreover, for inter-modal models, Late fusion seems to be a simple baseline. LRTF was proposed in 2018. My concern is whether the results hold for more recent and competitive models.
We provide details for the intra-modality model on lines 187-189. Essentially, it is a product of experts model similar to I2M2, but without the inter-modality expert, making it a competitive baseline for comparison.
For inter-modal models, we were careful in picking the most recent and the most competitive models as baselines for all our experiments. This is highlighted in the experimental setup for each dataset. Specifically, for AV-MNIST and MIMIC-III, we selected LRTF because it was the top-performing model from the MultiBench benchmark [1]. For FastMRI, we referenced the work by Madaan et al. [2], the only study we identified that focuses on diagnosis for this task. Additionally, for VQA and NLVR2, we used the FIBER model, which achieved state-of-the-art performance on both datasets, as demonstrated in the recent studies [3, 4].
---
> Can the author explain more about the derivation of Eq. (3)? How to replace 𝑝(𝑦∣𝑥) by 𝑞(𝑥∣𝑦)? The former is probability of 𝑦 while the latter is probability of 𝑥.
As explained in Section 3.1, our model generates a probabilistic score using four components derived from the joint distribution in Equation 1. In Equation 2, we multiply these components, with the denominator serving as a normalizing constant that does not depend on the label $ \mathbf{y} $.
The notation $ q_{\mathbf{x}}(\mathbf{y} \mid \mathbf{x}) $ is used to denote the function that maps $ (\mathbf{y}, \mathbf{x}) $ to a positive scalar. This function is proportional to $ p(\mathbf{x} \mid \mathbf{y}) $ because $ p(\mathbf{y} \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \mathbf{y}) p(\mathbf{y})}{p(\mathbf{x})} $. Thus, by defining $ q_{\mathbf{x}}(\mathbf{y} \mid \mathbf{x}) $ as proportional to $ p(\mathbf{x} \mid \mathbf{y}) $, we have simply renamed $ p(\mathbf{y} \mid \mathbf{x}) $ for convenience. Specifically, $ q_{\mathbf{x}}(\mathbf{y} \mid \mathbf{x}) = c \cdot p(\mathbf{x} \mid \mathbf{y}) $, where $ c $ is a constant that normalizes the function representing the interaction between $ \mathbf{y} $ and $ \mathbf{x} $.
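Spelled out with the Bayes' rule stated above, the renaming is:

$$
q_{\mathbf{x}}(\mathbf{y} \mid \mathbf{x})
= c \cdot p(\mathbf{x} \mid \mathbf{y})
= c \cdot \frac{p(\mathbf{y} \mid \mathbf{x})\, p(\mathbf{x})}{p(\mathbf{y})}
\propto \frac{p(\mathbf{y} \mid \mathbf{x})}{p(\mathbf{y})},
$$

where $ p(\mathbf{x}) $ does not depend on $ \mathbf{y} $ and is absorbed into the constant, so scoring over labels is unaffected by the renaming.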
---
Thank you again for your time and efforts in reviewing our paper, and we hope that you will consider raising your score if you find our response satisfactory.
Thank you,
Authors
---
References.
[1] Liang et al., 21 MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
[2] Madaan et al., 23 On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis.
[3] Dou et al., 22 Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone.
[4] Makino et al., 23 https://openreview.net/forum?id=QoRo9QmOAr.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. I will keep the score. | null | null | null | null | null | null |
FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Attention | Accept (poster) | Summary: The paper introduces a SpectralBlend Temporal Attention (SB-TA) mechanism, which blends low-frequency and high-frequency components from attention features together to enhance consistency and realism in generating long videos. The authors tested the proposed algorithm using 25 text prompts on LaVie and VideoCrafter, demonstrating improvements in VBench metrics. Additionally, they conducted ablation studies to showcase the characteristics of different modules in the proposed method.
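As we read it, the blending step can be sketched in a few lines; the hard low-pass mask, the cutoff value, and the `(frames, channels)` feature shape below are illustrative assumptions of ours, not the paper's exact SB-TA implementation (which operates on temporal attention features inside the diffusion model):

```python
import numpy as np

def spectral_blend(global_feat, local_feat, cutoff=0.25):
    """Blend the low frequencies of globally-attended features with the
    high frequencies of locally-attended features along the temporal axis.
    A hard cutoff mask is used here purely for illustration."""
    f_global = np.fft.fft(global_feat, axis=0)
    f_local = np.fft.fft(local_feat, axis=0)
    freqs = np.fft.fftfreq(global_feat.shape[0])
    low_pass = (np.abs(freqs) <= cutoff).astype(float).reshape(-1, 1)
    blended = f_global * low_pass + f_local * (1.0 - low_pass)
    return np.fft.ifft(blended, axis=0).real

# Toy (frames, channels) features standing in for attention outputs.
rng = np.random.default_rng(0)
g = rng.normal(size=(128, 4))   # global (full-length) attention features
l = rng.normal(size=(128, 4))   # local (window) attention features
out = spectral_blend(g, l)
```

Blending a feature with itself is an identity, since the low-pass and high-pass masks sum to one at every frequency.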
Strengths: 1. The proposed method achieved notable improvements, and the authors conducted interesting analyses to support their findings.
2. The proposed method supports multi-prompt long video generation.
Weaknesses: 1. My first concern is that 128 frames itself cannot be considered long video generation. Therefore, it would be more convincing to see how effective the authors' proposed method is on longer videos, such as generating 1-minute videos like Sora.
2. The proposed method includes several hyperparameters, such as alpha and tau. Are these parameters significant in influencing the final results? Because the authors tested only 25 prompts, it's unclear whether these hyperparameters remain suitable across a broader range of scenarios.
3. Although adding local features to long-range features can mitigate some loss of spatial details, for longer video generation with significant content variation, it remains uncertain whether these local features adequately supplement spatial details.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Figure 2's right panel demonstrates strong diagonal correlations in VideoCrafter at 64-frame and 128-frame intervals. How does this correlate with the findings proposed by the authors?
2. Since the focus of this paper is to test the algorithm's ability in long video generation, while VBench primarily benchmarks the quality of short video generation, the key question is how to demonstrate that the results obtained reflect the algorithm's performance in long video generation quality.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The method proposed by the authors can only generate videos up to 128 frames, which falls short of true long video generation. It has yet to be validated in generating genuinely long videos.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **My first concern is that 128 frames itself cannot be considered long video generation. Therefore, it would be more convincing to see how effective the authors' proposed method is on longer videos, such as generating 1-minute videos like Sora.**
>
Thank you for your feedback regarding the length of the generated videos. Following other works such as [1,2,3], which consider videos longer than 10 seconds as "long," we chose 128 frames to speed up our experiments. We understand that 128 frames may not fully represent long video generation compared to examples like Sora. It is important to note that our FreeLong method is not limited to this length. To address your concern, we have conducted further experiments evaluating FreeLong on generating longer videos of 256, 384, and 512 frames (approximately 1 minute at 8 fps). These results, shown in Figure 1 of the submitted one-page PDF, indicate that FreeLong achieves robust longer video generation across these lengths.
[1] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
[2] A Survey on Long Video Generation: Challenges, Methods, and Prospects
[3] FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling
> **The proposed method includes several hyperparameters, such as alpha and tau. Are these parameters significant in influencing the final results?**
>
We appreciate your question regarding the significance of hyperparameters such as $\alpha$ and $\tau$. We set $\alpha$ to match the pre-trained video length. To explore the impact of $\tau$, we conducted quantitative ablation studies, as shown in the table below:
| $\tau$ | sub consistency | back consistency | motion_smooth | imaging_quality |
| --- | --- | --- | --- | --- |
| 10 | 0.9236 | 0.9323 | 0.9685 | 0.6830 |
| 20 | 0.9350 | 0.9358 | 0.9709 | 0.6893 |
| 30 | 0.9462 | 0.9578 | 0.9758 | 0.6894 |
| 40 | 0.9667 | 0.9660 | 0.9754 | 0.6913 |
| 50 | 0.9601 | 0.9645 | 0.9736 | 0.6941 |
A larger $\tau$ means more denoising steps blend the global and local features. The table shows that our method keeps enhancing consistency as the number of blending steps increases, saturating at step 40.
> **Because the authors tested only 25 prompts, it's unclear whether these hyperparameters remain suitable across a broader range of scenarios.**
>
Regarding the number of test prompts, as described in Lines 178-179, we randomly selected 25 prompts from each of the 8 test categories in VBench, resulting in a total of 200 prompts. This ensures a fair and comprehensive comparison across all categories. All test prompts are listed in the supplementary materials.
> **Although adding local features to long-range features can mitigate some loss of spatial details, for longer video generation with significant content variation, it remains uncertain whether these local features adequately supplement spatial details.**
>
Thank you for pointing out the concern regarding the adequacy of spatial detail supplementation. We use the MUSIQ model, which evaluates over-exposure, noise, and blur of images, to validate the image quality of the generated longer videos. The results are as follows:
| Method | 16 frame | Direct | Sliding Window | FreeNoise | Ours |
| --- | --- | --- | --- | --- | --- |
| Imaging Quality | 0.6890 | 0.6298 | 0.6591 | 0.6645 | 0.6913 |
Our method achieves image quality comparable to 16-frame videos and significantly outperforms the other methods. More qualitative examples on longer videos and additional base models are provided in Figures 1 and 2 of the one-page PDF.
> **Figure 2's right panel demonstrates strong diagonal correlations in VideoCrafter at 64-frame and 128-frame intervals. How does this correlate with the findings proposed by the authors?**
>
Thank you for your valuable question. VideoCrafter2 is pre-trained on varied video lengths and is relatively robust to longer video generation compared to LaVie. To better understand the temporal relationships in temporal attention, we set the diagonal elements, such as (0,0) and (1,1), to 0, which makes the visualization of relationships between different frames more effective. As shown in Figure 4 of the one-page PDF, the attention map then reveals less structured attention patterns. This approach helps explain the long-range temporal attention of VideoCrafter2.
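Concretely, the masking used for this visualization amounts to the following (a minimal numpy sketch, assuming the temporal attention map is a frames-by-frames matrix):

```python
import numpy as np

def mask_self_attention(attn):
    """Zero the diagonal (frame-to-itself) entries of a temporal
    attention map so that cross-frame relationships stand out
    when the map is visualized."""
    masked = attn.copy()
    np.fill_diagonal(masked, 0.0)
    return masked
```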
> **how to demonstrate that the results obtained reflect the algorithm's performance in long video generation quality.**
>
Thank you for your valuable suggestions. VBench is a comprehensive evaluation benchmark for video diffusion models, with evaluation tools not constrained by video length, such as CLIP similarity, DINO similarity, and Temporal Flickering. Since no long-video benchmark was available before the paper submission, we used this benchmark. Per your suggestion, we compared our method on VBench-Long [4], designed for Sora-like video evaluation, as shown in the table below. Our method achieves consistent performance improvements over the other methods.
| Method | sub consistency | back consistency | motion_smooth | imaging_quality |
| --- | --- | --- | --- | --- |
| Direct | 0.8687 | 0.9151 | 0.9229 | 0.6298 |
| Sliding Window | 0.8811 | 0.9089 | 0.9574 | 0.6591 |
| FreeNoise | 0.9397 | 0.9511 | 0.9696 | 0.6645 |
| FreeLong(Ours) | 0.9667 | 0.9660 | 0.9754 | 0.6913 |
[4] https://github.com/Vchitect/VBench/tree/master/vbench2_beta_long
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. The rebuttal addresses most of my concerns. I would like to raise my score.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your thoughtful review and for recognizing our efforts to address your concerns. We sincerely appreciate your consideration and insights. Thank you once again! | Summary: This paper proposes a training-free method to generate 8x longer videos based on 16-frame pre-trained video diffusion models. It observes that extended temporal attention has a negative effect on high-frequency generation. Thus, it proposes an SB-TA module to fuse global low-frequency temporal features and local high-frequency temporal features.
Strengths: 1. Lines 42-43 & 107-121: The observation on SNR with spatial-temporal components is natural and reasonable.
2. Novelty: While frequency splitting and merging are common in low-level image processing tasks like super-resolution, this is the first solution for video temporal "super-resolution".
Weaknesses: 1. Lines 69-76: The related work section is not reader-friendly for those unfamiliar with the field. Simply listing papers across a wide range (from GAN to diffusion) is too vague to introduce the outline of the research field.
2. Lack of comparisons and baselines: The observation analysis and experiments only include two base models: LaVie and VideoCrafter. Note that both of these video diffusion models adopt relative positional encoding (PE) in their temporal attention blocks. Other models like AnimateDiff (absolute PE), DIT-based models like Open-Sora (RoPE), and ModelScope (which adds temporal convolution, no PE) are not discussed in the paper. This suggests that the proposed method may be limited to one specific PE.
3. The visual result quality is limited and does not include more motion, despite the frame count being 8x larger (16 to 128 frames). Although the authors show multi-prompt generation ability in Sec. 5.5, a single prompt should still be enough to cover a much longer motion. (e.g., Fig. 7's first prompt, "running Iron Man," is a complex motion, and it should be generated correctly as a complete motion with 128-frame sampling. Can you show this case in the rebuttal?)
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In Lines 33-38 and Fig. 1, the authors claim that longer video generation on short-clip pre-trained models will degrade high-frequency details. However, the results of FreeLong still do not have rich details (especially in the background).
2. Did you consider the memory cost of long-video generation with one forward pass (128 frames)?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We have carefully considered your comments and suggestions to improve our paper. Below are our responses to each of your concerns:
> **The related work section is not reader-friendly for those unfamiliar with the field**
>
We apologize for the lack of clarity in the related work section. We understand that it may be challenging for readers unfamiliar with the field. To address this, we will revise this section to provide a more comprehensive overview of the research landscape. We will include a structured summary that highlights the key developments and contributions in video diffusion models, making it more accessible and informative for all readers.
> **Lack of comparisons and baselines**
>
We appreciate your insightful feedback regarding the need for more comparisons and baselines. Initially, we chose LaVie and VideoCrafter2 due to their state-of-the-art performance in video generation. However, we recognize the importance of demonstrating the generalizability of our proposed method, FreeLong, across different base models.
In response, we have extended our experiments to include additional models, such as OpenSORA, AnimateDiff, and ModelScope. Figure 2 in the one-page pdf presents these results, showing that FreeLong performs well across these diverse models. This broader evaluation demonstrates that FreeLong is not limited to a specific positional encoding and can be effectively applied to various video diffusion models.
> **The visual result quality is limited and does not include more motions**
>
We understand your concern about the visual result quality and the limited motion content in our generated videos. For single-prompt inputs, we focus on maintaining consistency across long sequences, while for multi-prompt inputs, we aim to facilitate more complex and accurate motions.
As illustrated in Figure 3 of the one-page pdf, our method achieves more natural variations compared to FreeNoise. FreeNoise often generates repetitive content due to repeated noise initialization to maintain temporal consistency. In contrast, FreeLong utilizes global temporal attention to explicitly maintain temporal consistency without sacrificing motion variety.
> **FreeLong still do not have rich details**
>
Thank you for pointing out the need for richer high-frequency details, particularly in the background. FreeLong aims to improve both consistency and fidelity in long video generation. To validate the spatial details of generated videos, we used the MUSIQ model from VBench to evaluate image quality (e.g., over-exposure, noise, blur). The results are shown in the table below:
| Method | 16 frame | Direct | Sliding Window | FreeNoise | Ours |
| --- | --- | --- | --- | --- | --- |
| Imaging Quality | 0.6890 | 0.6298 | 0.6591 | 0.6645 | 0.6913 |
Our method achieves image quality comparable to 16-frame videos, demonstrating improved spatial detail. More qualitative examples on longer videos and additional base models are provided in Figures 1 and 2 of the one-page PDF.
> **Memory Cost**
>
We have calculated the computational cost, including inference time and memory usage, as shown in the table below:
| Method | Inference Time | Memory Cost |
| --- | --- | --- |
| Direct | 1.8s | 18251MiB |
| Sliding Window | 2.6s | 15017MiB |
| FreeNoise | 2.6s | 15017MiB |
| FreeLong(Ours) | 2.2s | 20179MiB |
Compared to the direct application method, our FreeLong method slightly increases both memory cost and inference time to achieve consistent and high-fidelity long video generation. FreeLong can generate 128-frame videos using a single NVIDIA RTX 4090 GPU, ensuring feasibility for practical use.
When compared to FreeNoise, FreeLong reduces the inference time per step from 2.6s to 2.2s while increasing memory usage. This highlights FreeLong's efficiency in generating longer videos, as it effectively balances inference time with memory cost.
By optimizing these trade-offs, FreeLong demonstrates its capability to generate longer videos efficiently.
---
Rebuttal Comment 1.1:
Comment: Thank you for the comprehensive response from the authors, which has addressed most of my concerns; I believe the authors need to highlight the expansion on other models in subsequent versions; my only remaining concern is regarding the effectiveness, as I think the video results of long sequence generation shown in the PDF still do not fully demonstrate that FreeLong can effectively generate complete complex motions. I will raise the evaluation to 5.
---
Rebuttal 2:
Title: Thank you!
Comment: Thank you for your thorough review and constructive feedback. We greatly appreciate your thoughtful comments and are pleased that our response has addressed most of your concerns. We will highlight the expansion on other models in future versions. Regarding the demonstration of FreeLong's effectiveness in generating complex motions over longer sequences:
Due to the 50 MB limit of the one-page PDF, we could only provide a few images for each example. While these images demonstrate the generalizability of our approach, they may not fully capture the complexity of the motions FreeLong can produce. Videos offer more coherence and motion variation than images. We would like to highlight that, compared to previous methods like FreeNoise, FreeLong achieves consistent long video generation with more natural motions, as shown in Figure 3 of the one-page PDF.
In future versions, we will include more detailed examples to showcase FreeLong's capability to handle complex motions in longer sequences effectively.
Thank you once again for your valuable insights, which have been instrumental in enhancing our work. | Summary: The author proposes FreeLong, a training-free method for long video generation. This paper identifies that the problem with long video generation lies in the scope of attention and the distortion of high-frequency components. Based on this observation, the author proposes a novel method to blend different frequency components of the global and local video, leading to better results in long video generation.
Strengths: 1. The research problem is important. The paper is well-written and easy to follow.
2. The observation and the method are reasonable.
3. Experiments results look promising.
Weaknesses: My main concern lies in the computational cost.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1) Computing the global attention map and video features seems computationally expensive, what is the computational cost compared to previous method (like freenoise) ?
2) Methods like FreeNoise suffer from repetitive generation. Generated contents will repeat several times. Can FreeLong solve this problem?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **what is the computational cost compared to previous method (like freenoise) ?**
>
Based on your feedback, we conducted a comparison of inference times between our method, FreeLong, and other methods, including FreeNoise, the sliding window method, and direct application. The results are summarized in the table below:
| Method | Inference Time |
| --- | --- |
| Direct | 1.8s |
| Sliding Window | 2.6s |
| FreeNoise | 2.6s |
| FreeLong(Ours) | 2.2s |
Our method, FreeLong, utilizes both global and local attention streams, performing a single-pass temporal attention forward operation. In contrast, previous methods like FreeNoise rely on sliding-window temporal attention, which requires multiple rounds of temporal attention and thus increases inference time. Our results demonstrate that FreeLong achieves a faster inference time.
> **Can freelong solve the repetitive generation problem?**
>
Regarding the repetitive generation problem observed in FreeNoise, it is primarily due to its reliance on repetitive noise initialization. Our proposed method, FreeLong, overcomes this issue by explicitly capturing global consistency during the temporal attention process. This enables FreeLong to generate consistent longer videos without depending on repetitive initial noises.
By effectively capturing global consistency, FreeLong produces more diverse and coherent video sequences, which significantly reduces the repetitive generation problem. Figure 3 in the submitted one-page PDF provides a comparison of the results between FreeLong and FreeNoise, demonstrating our approach's effectiveness in reducing repetitive generation while maintaining high-quality video generation results.
---
Rebuttal Comment 1.1:
Comment: Thank authors for the response. You have addressed my concerns.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your thoughtful review and for recognizing our efforts to address your concerns. We are pleased to hear that you found our explanations on computational efficiency and reducing repetitive generation helpful. We sincerely appreciate your consideration and insights. Thank you once again! | Summary: The paper presents FreeLong, a novel training-free method for generating extended videos (128 frames) using pre-trained short video (16 frames) diffusion models. The key component is the SpectralBlend Temporal Attention (SB-TA), which fuses low-frequency global video features with high-frequency local features to ensure consistency and fidelity in long video generation. Experimental results highlight FreeLong's superior performance in video fidelity and temporal consistency over existing methods.
Strengths: 1. **Comprehensive Analysis**: The paper conducts an in-depth frequency analysis to identify challenges in long video generation and supports the proposed solution with extensive experimental validation.
2. **Novel SB-TA Mechanism**: The proposed SpectralBlend Temporal Attention effectively mitigates high-frequency component degradation, ensuring consistent and high-fidelity video outputs.
3. **Training-Free Long Video Generation**: Long video generation is an important task and future direction. I am happy to see a training-free method appear in the community. The proposed FreeLong is both resource-efficient and practical, allowing for the adaptation of existing short video models to long video generation without the need for retraining.
4. **Multi-Prompt Generation Support**: The multi-prompt video generation is coherent within FreeLong, and the visual continuity and smooth transitions between different scenes are satisfactory.
Weaknesses: 1. Inference time comparisons should include both single-pass and multi-pass temporal attention from previous methods to demonstrate the advantages clearly.
2. Although the video generation results are impressive, a more extensive user study is required to validate the consistency and quality of the generated videos.
Technical Quality: 3
Clarity: 3
Questions for Authors: Some concerns please refer to the Weaknesses.
Also what about the longer duration of video generation, for example, videos longer than 128 frames?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Including examples of failure cases would provide a better understanding of the method's limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Inference time comparisons**
>
Thank you for your question. Following your advice, we conducted a comparison of inference times. As shown in the table below, our method achieves faster inference speeds than multi-pass methods, such as FreeNoise.
| Method | Inference Time |
| --- | --- |
| Direct | 1.8s |
| Sliding Window | 2.6s |
| FreeNoise | 2.6s |
| FreeLong(Ours) | 2.2s |
> **User study**
>
Thank you for your valuable suggestions. We conducted a user study to evaluate temporal consistency, fidelity, and overall rankings. Ten participants from academia and industry took part in this study. The results are as follows:
| Method | Consistency $\uparrow$ | Fidelity $\uparrow$ | Overall rank $\downarrow$ |
| --- | --- | --- | --- |
| Direct | 2.47 | 1.85 | 3.96 |
| Sliding Window | 2.03 | 2.30 | 3.45 |
| FreeNoise | 2.36 | 2.81 | 1.48 |
| FreeLong(Ours) | 3.14 | 3.04 | 1.11 |
Our method, FreeLong, outperformed other methods across all evaluated criteria.
> **Longer Videos**
>
We have provided longer videos, including results with 256, 384, and 512 frames, in the one-page PDF. These results demonstrate the generalization ability of FreeLong to handle longer videos.
---
Rebuttal Comment 1.1:
Title: Good rebuttal
Comment: Thank authors for the thorough response and the additional experiments results. I am satisfied with the rebuttal and no longer have any further concerns. Therefore, I have decided to raise my score to accept.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We greatly appreciate the reviewer's prompt response and thoughtful evaluation of our work! Your positive feedback and constructive comments are truly valuable in helping us refine and improve our project. We would like to express our heartfelt gratitude for your time and consideration. Thank you once again! | Rebuttal 1:
Rebuttal: We thank all reviewers for engaging in the review process. Our code will be made public upon acceptance.
We are deeply encouraged by the positive comments from the reviewers. We appreciate the recognition and endorsement of our proposed training-free pipeline, such as acknowledging its analysis and method as reasonable (**838f, oAuU**), interesting (**QZ6p**), novel (**XZSx**), and effective (**838f**). **838f**, **oAuU**, and **QZ6p** agree that the videos generated by our method achieve notable improvements.
In our individual replies, we have addressed the specific questions and comments as clearly and in as much detail as possible.
Moreover, we added several additional results to the one-page PDF and the individual replies to strengthen our work overall. Here, we briefly summarize these additional experiments and evaluations:
- Longer videos from 256 to 512 frames
- Generalization to other base models with different positional encodings, including ModelScope, AnimateDiff, and OpenSORA
- Detailed comparison with FreeNoise
- Clearer attention visualization on VideoCrafter2
We hope that these additional results further strengthen FreeLong's position as the state-of-the-art training-free method for extending short video diffusion models to longer sequences, and demonstrate that:
- FreeLong can effectively generate longer video sequences, not limited to a fixed 128 frames.
- FreeLong is robust and compatible with different video diffusion models, enhancing long-context consistency and fidelity.
Pdf: /pdf/e542c14e82ac8d612b30397ad796835262e4b6da.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Why Transformers Need Adam: A Hessian Perspective | Accept (poster) | Summary: This paper explores the performance gap between SGD and Adam when training transformers. The authors analyze the eigenspectrum of the Hessian in different neural network architectures and attribute the failure of SGD to the different Hessian spectra in transformers compared to CNNs and MLPs. They empirically show that while the full Hessian spectrum is similar at initialization across architectures, transformers exhibit block heterogeneity, meaning different parameter blocks (e.g., key/query layer versus MLP layer) have distinct spectra. Conversely, other architectures such as CNNs and MLPs are block homogeneous. By artificially inducing block heterogeneity in MLPs, they demonstrate through numerical experiments that SGD's performance declines as the block spectrum variation increases, while Adam's performance remains stable. They attribute Adam's success in block heterogeneous cases to its ability to assign different learning rates to different parameters. The authors also propose using the Jensen-Shannon distance of Hessian blocks as a quantitative measure of the performance gap. They also provide a preliminary theoretical analysis comparing the convergence rates of SGD and Adam on a quadratic function with a block-form Hessian, highlighting Adam's dependence on the fine-grained condition number of individual Hessian blocks.
Strengths: This paper tackles a key question: understanding the optimization challenges of transformers with SGD and Adam is crucial for developing more efficient techniques for training large language models.
The paper is well-organized and easy to read. I find the ideas introduced throughout this work interesting and potentially useful to the community.
Weaknesses: Quantifying the performance gap between SGD and Adam using the average JS distance between block-wise Hessian spectra is an interesting idea. However, based on the runtime reported in the appendix, the Hessian approximation itself seems computationally expensive, which can raise concerns about how practical it is for large models, especially since the experiments in this paper are only on small models.
Additionally, the main theoretical analysis is conducted in a pretty simplified setup where the momentum part of Adam is fully ignored ($\beta_1=0$ in Algorithm 3).
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Can you comment on the computational complexity of calculating the JS distance and the Hessian approximation? Specifically, can you compare the runtime of computing this quantitative measure to the training time of the models in your paper? For significantly larger models, do you expect this approach to scale efficiently and be more practical than training the model?
2. Is the architecture the only factor affecting the Hessian structure? I am curious if the training distribution, particularly label imbalance, also impacts the Hessian structure, since in some previous works, such as [46], the performance gap has been attributed to the heavy-tailed distribution of labels in language data.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the insightful question and careful evaluation of our paper. We provide response as follows.
**1. In theoretical analysis, the momentum part of Adam is fully ignored ( $\beta_1=0$ in Algorithm 3).**
Thanks for the comment. The current analysis ignores momentum because we want to focus on the benefit of assigning coordinate-wise lr, so we remove the effect of all other components to control variables. Nevertheless, we agree that choosing $\beta_1=0$ simplifies the problem. However, exploring the role of momentum is an independent and equally challenging problem. We consider it as an important future direction.
**2. On the computational complexity of calculating the JS distance and the Hessian approximation; compare runtime to the training time; scale efficiently to larger models**;
Thanks for the great question. Currently, our JS distance is indeed rather expensive to compute: it requires compute comparable to one training run. Fortunately, we find the current deployment of SQL is redundant for measuring Hessian heterogeneity. We propose the following simple tweaks to significantly reduce the computation time while still effectively detecting the Hessian heterogeneity. We call it **simplified SQL**:
1. Change the hyperparameters of SQL, including:
- 1-1: we change num_v = 10 to num_v = 1. In SQL, num_v decides the number of random Gaussian vectors used to approximate the expected quadrature. It is reasonable to reduce num_v because in high-dimensional space, random vectors tend to concentrate around their mean, so one random sample can already be informative enough.
- 1-2: we change the Lanczos step m = 100 to m = 10. Reducing the Lanczos step gives a coarser estimate of the middle eigenvalues, but won't much affect the heterogeneity measure, which depends more on the extreme eigenvalues.
2. Randomly sample a subset of blocks and reduce batch size for estimating the spectrum. We uniformly sample 50% blocks and choose batch size = 32 (previously batch size = 1024).
We report the results and runtime in the table below. The simplified SQL conveys the same message as the original SQL: **JS0 of ResNet is about 100x smaller than that of BERT.** Furthermore, the simplified SQL is highly efficient to compute. With this simplified SQL, we believe our method can efficiently scale to larger models.
| Model | JS0 | Time for JS0 | Time for Training | Time for JS0 / Time for training |
| :------: | :----------------: | :----------: | :---------------: | :------------------------------: |
| BERT | 98.83440094176784 | 20s | 4h | 0.001388889 |
| ResNet18 | 0.3568875006145019 | 65s | 87.5h | 0.000206349 |
Tested on: single V100
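To make the simplified estimator concrete, here is a rough numpy sketch of the spectrum-estimation idea behind SQL together with the JS-distance comparison. This is not our implementation: it uses an explicit matrix in place of Hessian-vector products, a single probe vector (num_v = 1), m = 10 Lanczos steps as in the simplified setting, and a simple weighted histogram for the density.

```python
import numpy as np

def lanczos_spectrum(matvec, dim, m=10, seed=0):
    """Approximate eigenvalue nodes/weights of a symmetric operator via
    one run of Lanczos quadrature (a single probe vector, num_v = 1)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    V = [v]
    alphas, betas = [], []
    for j in range(m):
        w = matvec(V[-1])
        alpha = V[-1] @ w
        alphas.append(alpha)
        w = w - alpha * V[-1] - (betas[-1] * V[-2] if betas else 0.0)
        for u in V:                        # full reorthogonalization
            w -= (u @ w) * u
        beta = np.linalg.norm(w)
        if beta < 1e-10 or j == m - 1:     # breakdown or last step
            break
        betas.append(beta)
        V.append(w / beta)
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    evals, evecs = np.linalg.eigh(T)
    return evals, evecs[0] ** 2            # Ritz values, quadrature weights

def js_distance(nodes1, w1, nodes2, w2, bins=10):
    """Jensen-Shannon distance between two estimated spectral densities."""
    lo = min(nodes1.min(), nodes2.min())
    hi = max(nodes1.max(), nodes2.max())
    p, _ = np.histogram(nodes1, bins=bins, range=(lo, hi), weights=w1)
    q, _ = np.histogram(nodes2, bins=bins, range=(lo, hi), weights=w2)
    p = p / p.sum() + 1e-12
    q = q / q.sum() + 1e-12
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))
```

Even this crude estimator separates two blocks with identical spectra from two blocks whose spectra differ by a factor of 100, which is the kind of gap the JS0 values for BERT and ResNet reflect.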
**3. Is the architecture the only factor affecting the Hessian structure? I am curious if the training distribution, particularly label imbalance, also impacts the Hessian structure**
We agree that there exist other factors (such as data distribution and label imbalance) that affect the Hessian structure. However, for ViT on ImageNet, we find SGD (even after careful tuning) is still largely worse than Adam. As such, we believe architecture plays a more crucial role.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I think the results will interest the community. I'll keep my score as is. | Summary: This paper investigates why Adam outperforms SGD in Transformers by examining Hessian estimations. While the overall Hessian spectrum appears similar in both Transformers and other neural networks, the block-wise Hessian spectrum is heterogeneous across blocks in Transformers and homogeneous in other neural networks. The authors empirically confirm that this heterogeneity hampers SGD's performance through experiments involving Transformers, CNNs, MLPs, and quadratic problems. They attribute Adam's advantage to its coordinate-wise learning rate and provide initial theoretical analysis in a simplified setting.
Strengths: - The connection between Adam's performance and Hessian block-wise heterogeneity is novel, as are the related empirical observations.
- Related work is comprehensively cited in the Appendix.
- Although the finding itself is not groundbreaking, it suggests potential for future research in theoretical analysis and practical improvements of Adam, particularly regarding its advantage in adaptively updating different blocks.
Weaknesses: - The experiments support the claim that Adam outperforms SGD when Hessian block-wise heterogeneity structure is present. However, it is unclear whether this correlation implies causation beyond intuitive understanding. In general, the conceptual contribution is not very strong.
- The definition of "block" also seems arbitrarily determined. For example, Fig. 3 shows that all convolution layers have a similar spectrum, while the spectrum differs between Query, Value, and MLP blocks in Transformers. Given that one set is the same type of "block" (conv layer) and the other is not, this does not seem surprising. Since how the block is defined entirely determines Hessian heterogeneity versus homogeneity, this observation might be overfitting the transformer architecture and could be difficult to generalize to non-mainstream architectures as claimed.
- The case study on quadratics helps understand and attempt initial theoretical analysis, but it only partially represents behaviours in larger networks. The noise in SGD does not play a significant role here, and Adam with $\beta_2 = 0.99$ performs very differently than in larger networks.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Including a more detailed experimental setting will help with reproducibility. The hyperparameters for experiments such as learning rate are not included. Since SGD is sensitive to hyperparameters, it's unclear whether the gap is due to the claimed reason or if hyperparams are insufficiently tuned. For example, the training curve of SGD in Figure 9(a) in ViT-ImageNet is much lower than Adam and that is not the case in [Figure 12, Kunstner et.al. Heavy-Tailed Class Imbalance and Why Adam Outperforms Gradient Descent on Language Models]. Can the author comment on this?
- Figure 1 shows the full spectrum across different time points to illustrate its evolution during training, yet all other plots in the paper related to the block-wise spectrum are presented at initialization. Is there a reason why the evolution of the block-wise spectrum is not shown?
- Are there any ablation studies on how well the Hessian spectrum is estimated through the SQL method, perhaps on reasonably small networks?
- Typo in Figure 7 y-axis label
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations are well-addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the careful review of our work. Here is our respectful response.
**1. It is unclear whether this correlation implies causation beyond intuitive understanding.**
We provide the following evidence on causation.
**I: Theory:** On quadratic models, our theory suggests that: when heterogeneity happens, the complexity of Adam is better than GD.
**II: Control-variable experiments:** We design the experiment as follows:
1) We take two Hessian matrices with an identical set of eigenvalues (cases 3 and 4 in Section 3.1). This ensures the performance of GD is fixed.
2) We re-arrange these eigenvalues in two different ways, which lead to heterogeneity and homogeneity (case 3 and 4).
3) We find Adam is faster when heterogeneity exists, resulting in a huge gap with GD.
This experiment implies "heterogeneity" can cause the "gap between Adam and SGD."
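The control-variable idea can be mimicked on a tiny quadratic, sketched below. All numbers here are hypothetical (they are not the eigenvalues of cases 3 and 4, and beta2 = 0.9 is chosen only so the toy run settles within 500 steps): the point is that with a block-heterogeneous Hessian, GD's single learning rate is capped by the stiff block and leaves the flat block underfit, while Adam with $\beta_1=0$ (keeping only the coordinate-wise step size, matching our simplified analysis) handles both blocks at once.

```python
import numpy as np

# f(x) = 0.5 * x^T H x with a block-diagonal Hessian: two "blocks"
# whose eigenvalue scales differ by ~1000x (heterogeneity).
H = np.diag([1.0, 2.0, 1000.0, 2000.0])
loss = lambda x: 0.5 * x @ H @ x
x0 = np.ones(4)

def gd(lr, steps=500):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * (H @ x)
    return x

def adam_no_momentum(lr, steps=500, beta2=0.9, eps=1e-8):
    # Adam with beta1 = 0: only the coordinate-wise rescaling is kept.
    x, v = x0.copy(), np.zeros(4)
    for t in range(1, steps + 1):
        g = H @ x
        v = beta2 * v + (1 - beta2) * g ** 2
        v_hat = v / (1 - beta2 ** t)       # bias correction
        x -= lr * g / (np.sqrt(v_hat) + eps)
    return x

# GD's single lr must satisfy lr < 2/2000 for stability, so the flat
# block (eigenvalues 1 and 2) barely moves; grid-search lr, keep the best.
gd_loss = min(loss(gd(lr)) for lr in [1e-4, 3e-4, 5e-4, 7e-4, 9e-4])
adam_loss = loss(adam_no_momentum(lr=3e-3))
```

On this toy example, the best grid-searched GD loss stays well above Adam's, illustrating the qualitative gap in four dimensions.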
**2. Block seems arbitrarily determined.**
We kindly point out that the blocks are not arbitrarily determined. Here is our principle: we define "blocks" as the irreducible dense principal sub-blocks in the Hessian.
Since precisely finding blocks by the principle above requires heavy Hessian computation, we use a computationally friendly alternative: we determine the blocks by the default PyTorch parameter partition, which we find usually matches our principle. For instance, in Figure 2, each block corresponds to the parameters in Q, K, V, and MLP, respectively. Under this common partition strategy, we observe heterogeneity in Transformers but not in CNNs.
**3. This observation might be overfitting the transformer architecture.**
We kindly point out that the observation is not specific to Transformers, since we also observe heterogeneity in non-Transformer models such as MLP-Mixer (a real-world non-attention-based architecture, **Figure 5 in paper**), where SGD also performs worse than Adam.
**4. The case study on quadratics helps understand and attempt initial theoretical analysis, but it only partially represents behaviors in larger networks.**
We agree that quadratic models cannot capture all behaviors of real NNs. To cover more behaviors in larger NNs, we need analysis in more fine-grained settings such as noisy quadratic models or 1-layer attention models. It is an intriguing direction to pursue in the future.
**5. It's unclear whether the gap is due to the claimed reason or if hyperparams are insufficiently tuned.**
We believe the gap is due to Hessian heterogeneity rather than improperly tuned hyperparameters for SGD. For all Transformers, we grid-search the learning rate of SGD and report the best performance. We grid-search the lr of SGD as follows.
- For BERT: we use lr = 1e-4 for Adam. For SGD, we search lr over [1e-4, 1e-3, 1e-2, 1e-1]
- For ViT: we use lr = 1e-4 for Adam. For SGD, we search lr over [1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1]
- For GPT2-nano: we use lr = 1e-2 for Adam. For SGD, we search lr over [1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1, 5e-1, 1]
We visualize the performance of SGD under different lr. This can be seen in **Figure 1 in the attached PDF**. We find that SGD (even after careful tuning) is still worse than Adam, including ViT on ImageNet.
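Schematically, this tuning protocol is a one-dimensional sweep over the learning-rate grid; the sketch below is our illustration (the `train_and_eval` callback is a hypothetical stand-in for a full training run, replaced here by a toy loss curve):

```python
def grid_search_lr(train_and_eval, lr_grid):
    """Run one training job per candidate lr and keep the best final loss.

    `train_and_eval(lr)` is assumed to train with that learning rate and
    return the final validation loss (a full run in practice).
    """
    results = {lr: train_and_eval(lr) for lr in lr_grid}
    best_lr = min(results, key=results.get)
    return best_lr, results[best_lr]

# Toy stand-in: a loss curve minimized at lr = 1e-2, swept over the
# GPT2-nano grid from the rebuttal.
best_lr, best_loss = grid_search_lr(
    lambda lr: (lr - 1e-2) ** 2,
    [1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1, 5e-1, 1],
)
```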
**Additional evidence for ViT:** We here provide more evidence that "SGD is worse than Adam on ViT" from the literature.
- On CIFAR-10, [1] carefully tuned SGD on ViT and found that SGD does not work (their Figure 4). They reported that "the SGD algorithm did not perform well for this task" even after careful tuning: "The optimizer hyperparameters were set according to the values obtained during hyperparameter tuning."
- On ImageNet, [2] provides empirical evidence that SGD is worse than Adam on vanilla ViT. The authors reported that "SGD yields significantly worse results than AdamW (on ViT)" and "ViT often fails to converge with SGD" (their Figure 3).
As for Figure 12 in Kunstner et al., there are two possible reasons for the mismatch: (1) they used SimpleViT, a simplified version of the vanilla ViT that we used; it is possible that SimpleViT exhibits less heterogeneity and is more friendly to SGD; (2) perhaps they did not tune Adam carefully enough, and Adam could perform better if well-tuned.
[1] AdaGC: A Novel Adaptive Optimization Algorithm with Gradient Bias Correction
[2] Early convolutions help transformers see better
**6. Block-wise spectrum along training.**
Following your suggestions, we plot the block-wise spectrum at 25%, 50%, and 100% training steps for GPT2 and ViT. We find an interesting phenomenon: Hessian heterogeneity tends to reduce along training. This can be seen in **Figure 2 and 3 in the attached PDF.**
We also take the checkpoint of ViT at the 46th epoch (about 50% of training steps) and continue to train with SGD. We find SGD now performs much better. We provide a similar explanation: SGD performs better here because there is less Hessian heterogeneity in ViT at the 46th epoch. This can be seen in **Figure 2 in the attached PDF.**
This phenomenon above also provides one interesting practical implication: if we can find a good initialization with less heterogeneity, it is possible to train transformers with SGD. However, designing such initialization is non-trivial, as its explicit relation with heterogeneity is still unclear. We leave this topic as an intriguing future direction.
To sum up, we make two new findings when investigating the blockwise spectrum along training: 1. Heterogeneity tends to reduce along training. 2. As heterogeneity is reduced, we can switch from Adam to SGD. We will include these results in the revised manuscript.
**7. Ablation studies on how well the Hessian spectrum is estimated through the SQL method, perhaps on reasonably small networks?**
We provide an ablation study on a 4-layer Transformer. The results are shown in **Figure 4 in the attached PDF.** We find SQL provides an accurate approximation of the true eigenvalue density.
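For readers unfamiliar with the estimator, the idea behind SQL (stochastic Lanczos quadrature) can be sketched in NumPy on a small explicit symmetric matrix. This is our illustrative version, not the authors' implementation; for a real network, `A @ v` would be replaced by a Hessian-vector product:

```python
import numpy as np

def slq_density(A, num_probes=10, m=30, seed=0):
    """Approximate the eigenvalue density of a symmetric matrix A via
    stochastic Lanczos quadrature, using only matrix-vector products.

    Returns quadrature (nodes, weights); sum(weights * f(nodes)) estimates
    (1/n) * trace(f(A)) for smooth f.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    all_nodes, all_weights = [], []
    for _ in range(num_probes):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        V = np.zeros((n, m))
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        V[:, 0] = v
        for j in range(m):  # m-step Lanczos with full reorthogonalization
            w = A @ V[:, j]
            alpha[j] = V[:, j] @ w
            w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                if beta[j] < 1e-12:  # invariant subspace found; stop early
                    break
                V[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        theta, U = np.linalg.eigh(T)
        all_nodes.append(theta)           # Ritz values = quadrature nodes
        all_weights.append(U[0, :] ** 2)  # Gauss quadrature weights
    return np.concatenate(all_nodes), np.concatenate(all_weights) / num_probes
```

The per-probe quadrature is exact for low-degree polynomials, so with a handful of probes the estimated density already matches the true spectrum of a small test matrix well.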
---
Rebuttal 2:
Title: Response to authors
Comment: Thank you for addressing my concerns; I will raise my score accordingly. | Summary: The paper investigates why SGD performs worse than Adam in Transformers from the lens of Hessian. They first find that Transformers are "heterogeneous", that is, the Hessian spectrum across parameter blocks varies dramatically. Then they conduct various tasks on Transformers, CNNs, MLPs, and quadratic problems, to verify that block heterogeneity hampers SGD. Finally, they derive some initial theoretical analysis to indicate that SGD fails because it applies one single learning rate for all blocks, which cannot handle the heterogeneity among blocks.
Strengths: 1. The paper is well written, clear to read, and the story is interesting.
2. The discovery of block heterogeneous for the reason of the bad performance of SGD on Transformers is interesting and novel.
3. The experiments are completed to verify the points described in the paper.
Weaknesses: 1. The paper demonstrates that, across different models, when the block heterogeneity at initialization increases, SGD becomes much worse than Adam. However, these comparisons are between different models. It is natural to ask: for the same Transformer model, when the initialization changes to induce different block heterogeneity levels, how would the performance of SGD compared with Adam change?
2. The theoretical results on the convergence of Adam for quadratic models show that homogeneous spectra have worse complexity than heterogeneous spectra; is this true in practice?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the careful review of our paper and the great questions. Please find our respectful reply below.
**1. For the same Transformer model, when the initialization changes to induce different block heterogeneity levels, how would the performance of SGD compared with Adam change?**
Thanks for the interesting question. Here is our finding: For the same model, when heterogeneity is reduced, the gap between Adam and SGD is also reduced. We provide evidence as below.
There are typically two approaches to change initialization (please let us know if neither of these aligns with your thoughts).
- Approach 1: Change the variance of random initialization.
- Approach 2: Choose a checkpoint in the middle of training as the initialization.
Unfortunately, we find Approach 1 is not suitable for controlling the heterogeneity: it seems unclear how the variance of random initialization is explicitly related to the heterogeneity in Hessian. As such, we use Approach 2 to change the initialization and track the performance of SGD. We present two experiments below. (**Exp 1** is already presented in the paper; **Exp 2** is new.)
**Exp 1:** On finetuning tasks of GPT2, when we change the initialization to the pre-trained weights, GPT2 (pre-trained) exhibits much less heterogeneity compared with GPT2 with random initialization. This can be seen in **Figure 4 (f) and Figure 6 (a) in the paper**. As a result, SGD can reach a loss similar to that of Adam here. This is shown in **Figure 6 (b)** in the paper.
**Exp 2:** We realize that **Exp 1** might not fully answer your question: although **Exp 1** does not change the architecture, it changes the training dataset (from pre-training dataset to SFT dataset), and thus changes the loss function. To rigorously address your concern, we further take the checkpoint of ViT at the 46th epoch (about 50% of training steps) and calculate Hessian heterogeneity. We make two findings: (1) For the ViT checkpoint at the 46th epoch, Hessian heterogeneity is largely reduced compared with that of random initialization. (2) If we continue to train it with SGD, we find SGD now performs much better than training from scratch. This result can be seen in **Figure 2 in the attached PDF**.
This phenomenon above also provides one interesting practical implication: if we can find a good initialization with less heterogeneity, it is possible to train transformers with SGD. Yet, as discussed above, designing such initialization is non-trivial, as its explicit relation with heterogeneity is still unclear. We leave this topic as an intriguing future direction.
**2. The theoretical results on the convergence of Adam for quadratic models show that homogeneous spectra have worse complexity than heterogeneous spectra; is this true in practice?**
Thanks for the great question. Let us clarify: our theory does **not** imply that Adam has worse complexity on homogeneous spectra. For simplicity of discussion, let's state your comment as a conjecture.
**Conjecture 1:** Adam on quadratic problems with homogeneous spectra has worse complexity than Adam on heterogeneous spectra.
If we understand your question correctly (please tell us if not so), your question states that our theory implies Conjecture 1. We kindly point out that Conjecture 1 is **not correct**. We clarify below.
(1) **What we proved and why it does NOT imply Conjecture 1**.
**Our theoretical result:** Adam has complexity $O(\max_l \kappa_l)$. GD has complexity $\Omega(\kappa)$
For our result to imply Conjecture 1, one would need the following argument: when changing heterogeneity to homogeneity, $\max_l \kappa_l$ increases, and thus Adam is slower.
However, "changing heterogeneity to homogeneity" does **not** necessarily mean "$\max_l \kappa_l$ increases". Actually, "$\max_l \kappa_l$" **can change in an arbitrary way** (can increase, decrease, or keep the same) when changing the heterogeneity. See detailed examples below.
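The rate separation $O(\max_l \kappa_l)$ vs. $\Omega(\kappa)$ can be seen numerically on a toy diagonal quadratic, using one tuned GD step size per block as a crude stand-in for Adam's blockwise adaptivity. This is our illustration under simplified assumptions (exact GD, per-block tolerance), not an experiment from the paper:

```python
import numpy as np

def gd_steps(eigs, tol=1e-6):
    # GD on f(x) = 0.5 * x^T diag(eigs) x from x0 = ones, with the
    # classical optimal step size 2 / (mu + L); count iterations to reach tol.
    lam = np.asarray(eigs, dtype=float)
    lr = 2.0 / (lam.min() + lam.max())
    x = np.ones_like(lam)
    for k in range(1, 10_000):
        x = x - lr * lam * x
        if 0.5 * np.sum(lam * x * x) < tol:
            return k
    raise RuntimeError("did not converge")

# Heterogeneous blocks: global kappa = 12, but max_l kappa_l = 2.
blocks = [[1.0, 2.0], [11.0, 12.0]]
global_steps = gd_steps([e for b in blocks for e in b])
# One tuned step size per block (proxy for per-block adaptivity);
# tolerance is applied per block, a simplification.
blockwise_steps = max(gd_steps(b) for b in blocks)
```

With a single learning rate, convergence is throttled by the global condition number 12; with one step size per block, each block only sees its own (small) condition number, so far fewer iterations are needed.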
(2) **Detailed Examples on Comparing Homo and Hetero Cases**.
We provide three examples below.
**Notation:** We use Adam (homo) to denote the rate of Adam on a homogeneous Hessian, and similarly for Adam (hetero).
**Example 1: Adam (homo) is the same as Adam (hetero)**
case 1-1: homo: eigenvalues {1,2}, {1,2}
case 1-2: hetero: eigenvalues {1,2}, {11,12}
Since $\max_l \kappa_l$ is the same for both cases 1-1 and 1-2, Adam (homo) is the same as Adam (hetero)
**Example 2: Adam (homo) is faster than Adam (hetero)**
case 2-1: homo: eigenvalue {1,1.5}, {1,1.5}
case 2-2: hetero: eigenvalue {1,2}, {11,12}
Since case 2-1 has smaller $\max_l \kappa_l$ than case 2-2, Adam (homo) is faster than Adam (hetero)
**Example 3: Adam (homo) is slower than Adam (hetero)**
case 3-1: homo: eigenvalue {1,11}, {2,12}
case 3-2: hetero: eigenvalue {1,2}, {11,12}
Since case 3-1 has larger $\max_l \kappa_l$ than case 3-2, Adam (homo) is slower than Adam (hetero)
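The three comparisons boil down to computing $\max_l \kappa_l$ per case; a short check with an illustrative helper of ours:

```python
def max_blockwise_kappa(blocks):
    # max_l kappa_l, where each block is listed by its Hessian eigenvalues
    return max(max(b) / min(b) for b in blocks)

k11 = max_blockwise_kappa([[1, 2], [1, 2]])      # Example 1, homo
k12 = max_blockwise_kappa([[1, 2], [11, 12]])    # Examples 1-3, hetero
k21 = max_blockwise_kappa([[1, 1.5], [1, 1.5]])  # Example 2, homo
k31 = max_blockwise_kappa([[1, 11], [2, 12]])    # Example 3, homo
```

The values come out as 2.0, 2.0, 1.5, and 11.0, matching the three verdicts: equal, homo faster, and homo slower, respectively.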
To sum up, there is no general conclusion on whether Adam under homogeneity is faster or slower than Adam under heterogeneity: either case can happen.
(3) **Possible source of confusion:** We suspect (correct us if not so) the confusion may come from the numerical examples (Cases 3 and 4 in Section 3.1) in the paper. Comparing the two figures, Adam (homo) in Case 3 is slower than Adam (hetero) in Case 4. But as argued above, this is just one example, and it does NOT show Adam (homo) is ALWAYS slower than Adam (hetero).
As a result, all the above three examples can happen, and "Changing hetero to homo" does not necessarily mean "Adam becomes slower".
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. The authors carefully address my questions. I thank the authors for pointing out that either case can happen for Adam under homogeneity and heterogeneity. The confusion is indeed caused by Cases 3 and 4 in Section 3.1, where the diagonal elements are the same, but in a different order. Maybe such reordering of the same elements has some correlation with the algorithm's complexity. Overall, I think this work is interesting and has potential for the community, thus I would like to update my score to 7. | Summary: In this work the authors assert that the Adam optimizer works well on Transformers while Stochastic Gradient Descent (SGD) does not, and they attempt to explain this phenomenon by inspecting the spectrum of Transformers (i.e., the eigenvalues of the model's Hessian matrix) and other models, such as convolutional networks, where SGD is competitive with Adam. They show that the spectra of Transformers and convolutional networks are similar, providing little insight. The main innovation of the authors is to investigate the spectrum of individual components/layers of these networks - the so-called block spectrum - and the main contribution of the work is to show that the block spectrum is empirically correlated with the ineffectiveness of SGD relative to Adam.
Strengths: 1) The paper is generally clear and well-written - this is appreciated
2) The idea of inspecting the block-wise spectrum is potentially innovative
3) The empirical results presented by the authors are fairly convincing.
Weaknesses: 1) The premise of this paper is that SGD performs worse for attention-based models, and it is unclear to me if this is true. I say this for two reasons:
a) The authors cite [45,56] on Line 16 after making this claim, presumably to support the claim. However, these are quite old publications and neither of these papers specifically make this claim. Are there other publications that make these claims?
b) The authors also provide their own empirical results (e.g., Fig. 3) to support this claim, however, these results seem highly insufficient to me to support this claim. How did the authors choose/optimize the learning rate for SGD, and/or the learning rate schedule?
If SGD does not generally perform worse than Adam on Transformers, then it seems to me that the work has little scientific value, since the differences in the block spectrum are just inconsequential, or at least their implications are different than suggested by the authors. E.g., perhaps the authors could argue that the heterogeneous block spectra imply that SGD needs a lot more tuning to work well. I'd ask the authors to please address this point, and I will consider raising my rating.
2) Novelty of using block spectrum (potentially). To me this is the main value of this work. I'm not as familiar with this topic, and it is unclear to me how non-obvious the use of a block spectrum would be for analysis, or how surprising the results of using this approach are: perhaps these findings would be somewhat obvious to researchers familiar with this topic? I will defer to other reviewers for this assessment.
Technical Quality: 3
Clarity: 3
Questions for Authors: See "Weaknesses" for my questions.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the careful evaluation of our paper. We respectfully provide our response as follows.
**1-1. Are there other publications that make these claims (SGD worse than Adam on attention-based model)?**
Thanks for raising this comment. Yes, it is widely reported that SGD largely underperforms Adam on attention-based models, **even after careful tuning**. Here are some references:
1. [1] provides empirical evidence that SGD is worse than Adam on BERT, but not on non-attention models. On BERT, the authors comment that "Although hyperparameters for SGD are finetuned, a large performance gap is still observed" (their Figure 1)
2. [2] provides empirical evidence that SGD is worse than Adam on ViT. The authors reported that "SGD yields significantly worse results than AdamW (on ViT)" and "ViT often fails to converge with SGD" (their Figure 3)
3. [3] carefully tuned SGD on ViT and found that SGD does not work (their Figure 4). They reported that "the SGD algorithm did not perform well for this task" even after careful tuning: "The optimizer hyperparameters were set according to the values obtained during hyperparameter tuning."
4. [4] carefully tuned SGD on modern GPT models and found that SGD does not work well (their Figure 1).
5. Besides, mainstream large language models use Adam, including GPT-3 [5], the Llama series [6], the Gemini series [7], etc. This serves as additional evidence that SGD is worse than Adam for Transformer training.
**1-2. How did the authors choose/optimize the learning rate for SGD, and/or the learning rate schedule?**
For the learning rate schedule, we use the same default cosine decay with warmup for both SGD and Adam. As for the learning rate, we grid search the learning rate of SGD and report the best performance. We grid-search the learning rate as follows.
- For BERT: we use lr = 1e-4 for Adam. For SGD, we grid search lr over [1e-4, 1e-3, 1e-2, 1e-1]
- For ViT: we use lr = 1e-4 for Adam. For SGD, we grid search lr over [1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1]
- For GPT2-nano: we use lr = 1e-2 for Adam, For SGD, we grid search lr over [1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1, 5e-1, 1]
We also visualize the performance of SGD under different learning rates. This can be seen in **Figure 1 in the attached PDF**. For all the Transformers we investigate, we find SGD (even after careful tuning) performs significantly worse than Adam.
**2. Novelty of using block spectrum (potentially).**
Thanks for the question. We believe our idea is non-trivial and non-obvious. The novelty of our perspective is also confirmed by the other three reviewers, for instance:
- Reviewer PTmy: "The discovery of block heterogeneous for the reason of the bad performance of SGD on Transformers is interesting and novel."
- Reviewer tJ3K: "The connection between Adam's performance and Hessian block-wise heterogeneity is novel, as are the related empirical observations."
- Reviewer A8Vc: "I find the ideas introduced throughout this work interesting and potentially useful to the community."
**Our comment on why blockwise spectrum is non-trivial.** For completeness, we further comment on why the perspective of the blockwise spectrum is new and non-trivial.
**--The natural idea does not work.** To explore "why Transformers need Adam", a natural idea is to investigate the full Hessian spectrum (eigenvalues). This is because, by optimization theory, the full Hessian eigenvalues largely determine the behavior of gradient methods, and there is an active line of work trying to understand NN training via the full Hessian spectrum (e.g., reference [8] here; also see references [12,32,39,67,68,69,75,76,97,98,103] in our paper). Unfortunately, we find there is no noticeable difference between the full spectrum of CNNs and that of Transformers (Figure 1 in the paper). As such, the natural idea of the full spectrum does not work.
**--Why our perspective is non-trivial.** Our major conceptual innovation is that we connect the following findings to "why Transformers need Adam".
1. Transformers contain various kinds of building blocks, while CNNs consist of similar convolutional layers. Based on this, we conjecture that blockwise differences might be crucial.
2. "The Hessian of NNs has a near-block-diagonal structure" (a non-trivial but highly overlooked finding in this field). Based on this, we realize that the blockwise spectrum carries rich information and can be used to quantify the blockwise difference. Note that for models with a dense Hessian, the blockwise spectrum can still be computed, but it loses substantial information.
As such, our blockwise spectrum perspective is novel to the deep learning community because it is based on multiple non-trivial findings.
We believe this perspective is also new to the optimization community: there, it is very rare to analyze (near-)block-diagonal Hessian structure since typical problems do not have such structure. For instance, in the classical non-linear programming benchmark [9], all problems have non-block-diagonal Hessians. We point out a new perspective to characterize modern optimization problems.
As such, we believe our perspective is new, non-trivial, and potentially useful to a wide range of audiences.
[1] Why are adaptive methods good for attention models?
[2] Early convolutions help transformers see better.
[3] AdaGC: A Novel Adaptive Optimization Algorithm with Gradient Bias Correction
[4] Deconstructing What Makes a Good Optimizer for Language Models
[5] Language models are few-shot learners
[6] Llama 2: Open foundation and fine-tuned chat models
[7] Gemini: A family of highly capable multimodal models
[8] An investigation into neural net optimization via hessian eigenvalue density
[9] Nonlinear Programming Solvers for Unconstrained and Constrained Optimization Problems: a Benchmark Analysis | Rebuttal 1:
Rebuttal: Dear reviewers and AC:
We attached a PDF with the following four figures. Please check.
**Figure 1:** On ViT, BERT, and GPT2-nano, we carefully grid search the learning rate for SGD and report all the results. We find that on all these tasks, SGD (even after careful tuning) is significantly worse than Adam.
**Figure 2 and 3:** For ViT and GPT2, we plot the evolution of heterogeneity of Hessian along training. We find that heterogeneity is reduced along with training. Further, when switching Adam to SGD in the middle of training where the heterogeneity is reduced (e.g., the 46th epoch of ViT training), SGD can perform better.
**Figure 4:** We conduct an ablation study on a small GPT to show that our SQL method can accurately compute the Hessian spectrum.
Pdf: /pdf/813cb70f4187bc614b3508733067d251437e29e9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Certified Robustness for Deep Equilibrium Models via Serialized Random Smoothing | Accept (poster) | Summary: This work first studies Randomized Smoothing in Deep Equilibrium Models. To combat the prohibitive computational cost, this work designs a procedure called SRS to speed up certification, mainly relying on the fast convergence of multiple predictions with DEM. The certification theorem is revised accordingly to adapt to the new procedure. Experimental results prove that the new algorithm achieves stronger or comparable performance over the standard RS, with a major speedup.
Strengths: This paper is the first to study RS in DEMs, and thus presents a solid step. I appreciate that the authors consider both the speedup over the standard approach and the performance of their approach. The theoretical analysis seems correct, despite minor issues.
Weaknesses: I don't find major drawbacks in the approach and the evaluations. Below are some minor concerns:
1. In the evaluation, the authors only show the discretized certified radius curve, while the average certified radius (ACR) is the standard metric for evaluating RS. I suggest that the authors report ACR as well.
2. The authors indicate that the models are trained via Gaussian, while there are many recent advances in training RS models, e.g., SmoothAdv [1] and CATRS [2]. I advise the authors to include the performance of SRS under at least one of these more advanced training tricks, if applicable, to show the universality of their approach w.r.t. the model.
3. Line 190 says that the bounding subset is selected during the SRS sampling; this means the hypothesis test here is not independent of the SRS, so in the proof, the multiplication of events could fail. The authors should rethink and potentially fix their proof accordingly.
4. SRS basically reduces the complexity of fixed point iteration, by making the prediction serial. However, this is not ideal for parallelization. In Fig 1.b, SRS sequentially uses the result from the last prediction as the initialization; while these factual more precise fixed points could be beneficial, to facilitate parallelization, I suspect that even initializing all others with the result of the first prediction is sufficient, as they follow the same distribution. Could the authors elaborate more on this?
[1] Salman et al., Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers.
[2] Jeong et al., Confidence-aware Training of Smoothed Classifiers for Certified Robustness.
Technical Quality: 4
Clarity: 3
Questions for Authors: See weakness. Below are some additional questions:
Minor:
Line 98 & 158: Cohen et al. did not discuss fixed point solvers and DEQs. Is that a wrong citation?
Line 454 & 455, `for $\|\delta\|<R$` is duplicated. Further, I advise the authors to explain why they specifically chose $N^E_A$ in their algorithm: the proof does not seem to motivate this choice. Regarding $p_m$, should it be the probability that the prediction is correct under SRS but not normal RS?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors do not seem to include a limitation section. They point to the conclusion section in the checklist, which I don't find valid.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you so much for your valuable comments and recognition of the novelty, effectiveness, and presentation of our work. We are happy to address your concerns and questions with the following results and illustrations.
**Weakness 1**: In the evaluation, the authors only show the discretized certified radius curve...
**Answer**: Thank you for your suggestion. We produce the results with the ACR metric for CIFAR-10 following the definition in [1]. Here we report them in Table 1. We will add them to the main paper in a revised version.
| Model | ACR | Model | ACR |
|---|---|---|---|
| MDEQ-LARGE-1A | 0.27 | MDEQ-SMALL-1A | 0.23 |
| MDEQ-LARGE-30A | 0.62 | MDEQ-SMALL-30A | 0.59 |
| SRS-MDEQ-LARGE-1A | 0.59 | SRS-MDEQ-SMALL-1A | 0.56 |
| SRS-MDEQ-LARGE-3A | 0.62 | SRS-MDEQ-SMALL-3A | 0.59 |
Table 1: Average certified radius for the MDEQ architecture with $\sigma=0.5$ on CIFAR-10.
[1] Chen, Ruoxin, et al. "Input-specific robustness certification for randomized smoothing." AAAI 2022.
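For clarity, the ACR of [1] is simply the certified radius averaged over the whole test set, counting misclassified or abstained points as radius 0; a minimal sketch of ours:

```python
import numpy as np

def average_certified_radius(radii, correct):
    # radii: certified radius per test point; correct: whether the smoothed
    # prediction matched the label (abstentions count as incorrect).
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return float(np.mean(np.where(correct, radii, 0.0)))
```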
**Weakness 2**: The authors indicate that the models are trained via Gaussian, while there are many recent advances...
**Answer**: Thank you for your suggestion. Here we produce the results with SmoothAdv to show the effectiveness of our method, and we will add them to the main paper in a revised version. For SmoothAdv, we choose PGD [1] as the adversarial attack, and the number of adversarial examples in training is set to 4. The results are shown in Table 2. With SmoothAdv, the certified accuracy increases for both standard randomized smoothing and our SRS.
| Model \ Radius | 0.0 | 0.25 | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 |
|-------------------------|-----|------|-----|------|-----|------|-----|
| MDEQ-30A (adv) | **62%** | **54%** | 43% | **37%** | **30%** | **23%** | 14% |
| MDEQ-30A (standard) | 62% | 50% | 38% | 30% | 22% | 13% | 9% |
| SRS-MDEQ-1A (adv) | 60% | 43% | 35% | 27% | 18% | 14% | 9% |
| SRS-MDEQ-3A (adv) | 60% | 52% | **43%** | 36% | 29% | 22% | **14%** |
Table 2: Certified accuracy for the MDEQ-SMALL architecture with $\sigma=0.5$ on CIFAR-10 using SmoothAdv.
[1] Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. "Adversarial machine learning at scale." arXiv:1611.01236 (2016).
**Weakness 3**: Line 190 says that the bounding subset is selected during the SRS sampling, this means ...
**Answer**: Thanks for pointing out this problem. Because the two hypothesis tests are both based on noisy samples, they could be dependent. In this case, we slightly revise our proof as follows: denote by $A$ the event that the radius of SRS is smaller than the radius of RS, and by $B$ the event that the radius of RS certifies the data point. The hypothesis tests give $\mathbb{P}(\bar{A})=\mathbb{P}(\bar{B})=\tilde{\alpha}$. The final probability of successfully certifying the data point is:
$$
\mathbb{P}(A\cap B) = 1-\mathbb{P}(\bar{A}\cup \bar{B}) \geq 1-\mathbb{P}(\bar{A})-\mathbb{P}(\bar{B}) = 1-2\tilde{\alpha}
$$
where $\mathbb{P}(\bar{A})$ denotes the probability that $A$ does not happen. By setting $\tilde{\alpha}=\alpha/2$, we complete the proof. Carefully reevaluating the difference between the previous $\tilde{\alpha}=1-\sqrt{1-\alpha}$ and the current one, the reported accuracy does not change because $\alpha$ is very small (e.g., $\alpha=0.01$). We will revise our proof and the corresponding algorithm in a new version of our paper.
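For context, each hypothesis test ultimately produces a one-sided lower confidence bound on $p_A$ that is plugged into Cohen et al.'s radius $R=\sigma\,\Phi^{-1}(\underline{p_A})$. The sketch below is ours and uses a normal-approximation bound for brevity, whereas the papers use the exact Clopper-Pearson test:

```python
import math
from statistics import NormalDist

def certified_radius(n_a, n, sigma, alpha):
    # n_a of n noisy samples voted for the top class; return a radius that
    # holds with confidence 1 - alpha (0.0 means abstain). The lower bound
    # on p_A is a normal approximation, not the exact Clopper-Pearson bound.
    p_hat = n_a / n
    z = NormalDist().inv_cdf(1 - alpha)
    p_lower = p_hat - z * math.sqrt(p_hat * (1 - p_hat) / n)
    if p_lower <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_lower)
```

Tightening $\alpha$ (as in the revised proof, where each test runs at $\alpha/2$) only lowers the confidence bound slightly, which is why the reported accuracy is essentially unchanged for small $\alpha$.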
**Weakness 4**: SRS basically reduces the complexity of fixed point iteration, by making the prediction serial...
**Answer**: Because of the page limitation, the corresponding analysis is in Appendix K (start points). For convenience, we briefly restate our conclusion: the results with the last fixed points are better than those starting from the first prediction when the number of fixed-point iterations is small, as shown in Table 3. With the previous fixed point, our certification process accumulates randomness to avoid a bad guess at the beginning, so we get better results. We will discuss this question further in the revised version.
| Model \ Radius | 0.0 | 0.25 | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 |
|---------------------|------|------|------|------|------|------|------|
| SRS-MDEQ-1A-clean | 56% | 48% | 40% | 29% | 20% | 16% | 12% |
| SRS-MDEQ-3A-clean | 64% | 52% | 45% | 33% | 23% | 15% | 11% |
| SRS-MDEQ-1A | 63% | 53% | 45% | 32% | 22% | 16% | 12% |
| SRS-MDEQ-3A | 66% | 54% | 45% | 33% | 23% | 16% | 11% |
Table 3: Certified accuracy for the MDEQ-LARGE architecture with $\sigma=0.5$ on CIFAR-10. The first two rows represent the results starting from clean data.
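The warm-start idea behind SRS can be illustrated on a toy contraction map standing in for a DEQ layer; this is our sketch with made-up sizes and tolerances, not the authors' solver. Solving the fixed point for a new noisy input from the previous solution needs no more Picard iterations than solving from scratch, since the two solutions are close:

```python
import numpy as np

def fixed_point(f, z0, tol=1e-8, max_iter=1000):
    # Plain Picard iteration z_{k+1} = f(z_k); returns (z*, #iterations).
    z = z0
    for k in range(1, max_iter + 1):
        z_next = f(z)
        if np.linalg.norm(z_next - z) < tol:
            return z_next, k
        z = z_next
    return z, max_iter

rng = np.random.default_rng(0)
W = 0.2 * rng.standard_normal((8, 8))  # small weights => contraction
x = rng.standard_normal(8)

def deq_layer(x):
    # Toy "equilibrium layer": z* solves z = tanh(W z + x).
    return lambda z: np.tanh(W @ z + x)

# Cold start: solve for one noisy input from zeros.
x1 = x + 0.01 * rng.standard_normal(8)
z1, cold_iters = fixed_point(deq_layer(x1), np.zeros(8))

# Warm start: solve for the next noisy input from the previous fixed point.
x2 = x + 0.01 * rng.standard_normal(8)
z2, warm_iters = fixed_point(deq_layer(x2), z1)
```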
**Minor Question:** Line 98 \& 158, Cohen et al did not discuss fixed point solvers and DEQs. Is that a wrong citation?...
**Answer**: Thanks for pointing out the wrong citation, we are supposed to cite the paper about the solver [1]. Besides, we will remove the duplicated words in the text.
As you understand, $p_m$ is the probability that the prediction is correct under SRS but not under normal RS. Therefore $N_E^A$ is the number of effective predictions in SRS, which gives a conservative estimate relative to standard randomized smoothing. We will correct all of those mistakes and clarify the notation in a revised version of our paper.
[1] Bai, Shaojie, Vladlen Koltun, and J. Zico Kolter. "Neural deep equilibrium solvers." ICLR 2021.
**Updating Limitations**: Thanks for pointing out the lack of limitations. We will add the following text in a revised version: "Though our paper speeds up the certification of DEQs with randomized smoothing, it cannot be directly applied to other architectures. We regard speedups for general architectures as future research."
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal from authors. This clears my concerns. I will raise my score to accept.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for reconsidering our submission. We will promptly incorporate the related works you mentioned.
If you have any further concerns, please don't hesitate to let us know. | Summary: Due to the inability of existing deterministic methods for providing certified robustness to Deep Equilibrium Models (DEQs) to be applied to large-scale datasets and network structures, this paper provides scalable certified robustness for DEQs through the probabilistic method of random smoothing. To avoid the high computational costs associated with directly using existing random smoothing methods, this paper implements Serialized Randomized Smoothing using historical information from previous inputs to reduce computational overhead. The paper also provides theoretical correctness proofs for this method. Experiments show that significant computational performance improvements can be achieved without sacrificing certified accuracy.
Strengths: Originality. Through an analysis of the computational efficiency of directly applying random smoothing methods to obtain certified robustness for DEQs, it was concluded that the Monte Carlo estimation in the random smoothing method and the fixed-point solver in DEQs are the computational efficiency bottlenecks. This paper proposes a serialized smoothing method, which improves the computational efficiency of the certified robustness method while maintaining certified accuracy. Compared to directly applying random smoothing for certified robustness, this represents a significant improvement in computational performance.
Quality. The paper concisely utilizes the historical feature representation information from other noisy samples, eliminating the substantial redundant computations caused by Monte Carlo estimation in random smoothing. This enhances the practicality of the proposed certified robustness method. Furthermore, by introducing a new certified radius estimation method, they ensure the correctness of this concise algorithm.
Significance. This paper effectively addresses the practicality issues caused by the computational overhead when applying the seemingly universal random smoothing method to specific deep learning models. It provides a concise processing solution tailored to the computational characteristics of specific network models. The authors also conduct a theoretical analysis of the method's correctness resulting from its application. This work offers an inspiring research approach for the research community to improve the computational efficiency of random smoothing in future studies.
Weaknesses: In the process of correlation-eliminated certification, this paper requires the use of standard DEQs to drop unreliable predictions, which necessitates additional memory overhead for standard DEQs during actual deployment.
Technical Quality: 3
Clarity: 3
Questions for Authors: In this paper, the authors have been exploring the introduction of a new serialized random smoothing certification method for robustness to avoid the expensive computational costs of previous methods while ensuring that certified accuracy is not affected. I would like to know if the serialized smoothing certification method for robustness has any impact on the clear accuracy metric for DEQs compared to the previous traditional random smoothing certification methods, and if so, how significant is this impact?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you so much for your valuable comments and recognition of the novelty, effectiveness, and presentation of our work. We are happy to address your concerns and questions.
**Weakness 1**: In the process of correlation-eliminated certification, this paper requires the use of standard DEQs to drop unreliable predictions, which necessitates additional memory overhead for standard DEQs during actual deployment.
**Answer**: We think there is a slight misunderstanding about DEQs. The memory cost of a standard DEQ is independent of the number of fixed-point iterations. The standard DEQs have the same memory overhead as our method because we only change the number of fixed-point iterations while keeping the same solvers; our experiments show that both use about 1.5 GB of memory with a batch size of 400. Therefore, the standard randomized smoothing only costs more time, not more memory. Thanks for your question, and we will clarify this in the revision of our paper.
**Question 1**: In this paper, the authors have been exploring the introduction of a new serialized random smoothing certification method for robustness to avoid the expensive computational costs of previous methods while ensuring that certified accuracy is not affected. I would like to know if the serialized smoothing certification method for robustness has any impact on the clear accuracy metric for DEQs compared to the previous traditional random smoothing certification methods, and if so, how significant is this impact?
**Answer**: We assume you are referring to the ''clean'' accuracy of our proposed method instead of the ''clear'' accuracy. In Tables 1-3 of the main paper, we show the certified accuracy under different radii. The clean accuracy is a special case where the radius is 0: in this case, we do not require the smoothed classifier to be robust, only to make correct predictions. For convenience, we copy part of the results here. Our method sacrifices only a little clean accuracy (about 1%) compared to the standard randomized smoothing, as shown in Table 1.
Hope our answer can address your question, and we are glad to reply if you need more description.
| Model | Clean Accuracy |
|-------------|----------------|
| MDEQ-30A | 67% |
| SRS-MDEQ-3A | 66% |
Table 1: Clean accuracy for the MDEQ-LARGE on CIFAR-10.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. This addresses my concerns, so I’ll be raising my score to accept.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for your positive feedback and reconsidering our submission.
If you have any further concerns, please don't hesitate to let us know. | Summary: This paper provides a method for randomized smoothing for DEQs. Given the computational challenges associated with DEQs, the authors propose a method that is intended to speed up the process of creating a smooth classifier for a DEQ model based on fixed-point reuse. Given that fixed-point reuse introduces dependency between the predictions for the models f, the authors propose a method called Correlation-Eliminated Certification in which the goal is to remedy the aforementioned issue with fixed-point reuse. This method is centered on approximating the probability with which SRS misclassifies a point as the most probable class, and uses the original DEQ to achieve this. Then the authors provide a high probability certification result in Theorem 3.1.
Strengths: * The paper adapts the certified robustness method of randomized smoothing to deep equilibrium models while keeping in mind the nuances of a DEQ. In doing so, they come up with a novel and unique method for certifying a DEQ. The paper also provides careful analysis of their method, and shows that their method is both efficient and admits high certified accuracy.
* The authors provide a thorough ablations section which answers questions regarding the goodness of their approximation for $p_m$.
* The authors provide a theoretical guarantee for the correctness of their method, and provide extensive experimental results.
Weaknesses: * It would be nice to provide more careful definitions. One definition that should be in the paper is that of $\ell_2$-norm certified radius, as is defined for example in [1]. A pointer to a paper which introduces this is also missing. Generally speaking, although [1] also uses similar notation to this paper, it is a bit confusing to write $c_A$ without writing it as a function of $x$. My understanding is that $c_A$ is the most likely class given some sample $x$, so as a reader this is a bit confusing to not write it as a function of $x$. Another definition which seems to be crucial in your paper is LowerConfBound (e.g. Equation (12)) and hence it would be nice to provide the definition rather than to point the reader to [1]. Furthermore, there are less crucial definitions such as $K$, in Equation (10), which are not defined (but I assume represents the number of samples from SRS, $K < N$). Furthermore, can you please make the notation consistent between Section 3.3 and Appendix A which contains the proof for Theorem 3.1? Specifically there is lower-case $y$ in Appendix A and the subscript notation for the labels is also different between the two sections.
* Using a model with Jacobian regularization seems as though it might be more suitable for a certification method. It would be nice to provide more justification regarding this choice, and to show how the method would perform on an MDEQ trained without Jacobian regularization.
[1] Cohen, J., Rosenfeld, E., and Kolter, Z. Certified adversarial robustness via randomized smoothing. In international conference on machine learning, pp. 1310–1320. PMLR, 2019
Some notes:
* Lines 49-54 introduce acronyms that are not defined
* Neyman-Pearson is spelled incorrectly on line 108
* Figure 5 caption: converted -> converts. The sentence starting with “For instance” does not read well. Overall, could you write your caption a bit more clearly?
* Line 574: missing period
* Algorithm 1: Line 13 should say Predict $Y_g$? It seems as though some indexing is also missing. Also, it is not clear to me what the relationship is between the total number of samples and the number of samples used for $p_m$ in Algorithm 1.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the effect of using a model trained with Jacobian regularization on your method? Do you think this is crucial for certifying a DEQ?
2. What is the relationship between the number of samples that are checked with the DEQ in Line 13 of Algorithm 1, and the total number of samples obtained via SRS?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you so much for your valuable comments and recognition of the novelty, effectiveness, and presentation of our work. We are happy to address your concerns and questions.
**Question 1**: What is the effect of using a model trained with Jacobian regularization on your method? Do you think this is crucial for certifying a DEQ?
**Answer**: Jacobian regularization stabilizes the training of the backbones but it is not crucial for the certification. To answer your question, we conduct the experiments without the Jacobian regularization as shown in Table 1. The results show that using Jacobian regularization can help stabilize the fixed-point solvers but will almost not affect the final performance with enough fixed-point iterations. Generally speaking, the more stable the model is, the more efficient our approach is (i.e., we can use fewer steps of iterations). Our conclusion is consistent with [1] where the regularization does not increase the accuracy but decreases the number of fixed-point iterations.
The experiments show that using Jacobian regularization is not crucial for the certification. However, we recommend using the regularization during training for more stable performance. We hope this answer addresses your question, and we are happy to provide further details if needed.
| Model \ Radius | 0.0 | 0.25 | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 |
|------------------------|-----|------|-----|------|-----|------|-----|
| MDEQ-30A (w/o Jacobian) | 63% | 51% | 38% | 29% | 19% | 13% | 7% |
| SRS-MDEQ-3A (w/o Jacobian) | 60% | 49% | 38% | 28% | 18% | 12% | 6% |
| MDEQ-30A (w Jacobian) | 62% | 50% | 38% | 30% | 22% | 13% | 9% |
| SRS-MDEQ-3A (w Jacobian) | 60% | 50% | 38% | 29% | 21% | 12% | 8% |
Table 1: Ablation study of Jacobian regularization for the MDEQ-SMALL architecture with $\sigma=0.5$ on CIFAR-10.
[1] Bai, Shaojie, Vladlen Koltun, and Zico Kolter. "Stabilizing Equilibrium Models by Jacobian Regularization." International Conference on Machine Learning. PMLR, 2021.
**Question 2**: What is the relationship between the number of samples that are checked with the DEQ in Line 13 of Algorithm 1, and the total number of samples obtained via SRS?
**Answer**: Thank you for pointing out our unclear expressions and our oversight in not updating the equations to match the notation of Algorithm 1. Generally speaking, Line 13 of Algorithm 1 returns the standard DEQ predictions, which are compared against the SRS predictions. The comparison tells us what $p_m$ should be. After dropping the unreliable predictions with $p_m$, we use the total number of samples $N$ to predict the certified radius.
To be specific, Line 14 should be expanded into two steps. First, we use equation (12) to compute the estimated $\overline{p_m}$, namely:
$$
N_1 = \sum\nolimits_{i=1}^{K}\mathbf{1} \\{Y_m = Y_g \text{ and } Y_g = c_A(x) \\} ,
$$
$$
N_2 = \sum\nolimits_{i=1}^{K}\mathbf{1}\\{Y_m = c_A(x)\\},
$$
$$
\overline{p_m} = 1-\text{LowerConfBound}(N_1, N_2, 1-\tilde{\alpha}),
$$
Then we use equation (9) to estimate the effective samples that are predicted as class $c_A (x)$ with our estimated $\overline{p_m}$:
$$
N_A^E = N_A - \overline{p_m} N_A,
$$
Finally, with the total number of samples $N$, we compute the radius with equations (13) and (14):
$$
\underline{p_A} = \text{LowerConfBound}(N_A^E, N, 1-\tilde{\alpha})
$$
$$
R = \sigma\Phi^{-1}(\underline{p_A}),
$$
We hope this answer addresses your question, and we are happy to provide further details if needed.
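As an illustration only (not the authors' implementation), the three steps above could be sketched in Python. Here `lower_conf_bound(k, n, confidence)` is assumed to be any one-sided binomial lower confidence bound (e.g. Clopper-Pearson), and all counts are hypothetical:

```python
from statistics import NormalDist

def certified_radius(n1, n2, n_a, n_total, sigma, alpha, lower_conf_bound):
    """Sketch of the correlation-eliminated radius estimate described above.

    n1, n2: agreement counts from the K audited samples (Eq. 12);
    n_a: SRS votes for the top class c_A(x); n_total: total samples N.
    """
    # Step 1 (Eq. 12): upper-bound the mismatch probability p_m.
    p_m_bar = 1.0 - lower_conf_bound(n1, n2, 1.0 - alpha)
    # Step 2 (Eq. 9): discount the unreliable SRS votes.
    n_a_eff = n_a - p_m_bar * n_a
    # Step 3 (Eqs. 13-14): standard RS radius on the effective counts.
    p_a_lower = lower_conf_bound(int(n_a_eff), n_total, 1.0 - alpha)
    if p_a_lower <= 0.5:
        return None  # abstain: no radius can be certified
    return sigma * NormalDist().inv_cdf(p_a_lower)
```

Returning `None` mirrors the usual abstain behavior of randomized smoothing when the lower bound does not exceed 1/2.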
**Weakness 1**: Revising the typos and unclear expressions
**Answer**: Thanks for pointing out the inconsistency of the notations and the unclear expressions and we are glad to correct them in a revised version. First, we will add the definition of $\ell_2$ norm and claim $c_A$ as the function of $\mathbf{x}$, namely $c_A (\mathbf{x})$. Secondly, we will correct the typos and the inconsistency notation in the proof. Thirdly, we will add the definition of acronyms for Interval Bound Propagation (IBP) and Lipschitz Bounded Equilibrium Networks (LBEN).
Besides, we will revise the caption of Fig. 5 as: ``For instance, the predictions for $\mathbf{x}+\epsilon_2$ differ between RS and SRS. Therefore, the prediction for $\mathbf{x}+\epsilon_2$ will not be counted toward the most probable class $\hat{c}_A(\mathbf{x})$.''
For the definition of $K$, it is a hyperparameter that we described but forgot to mark. Specifically, it should be defined in lines 189-191: ``During the Monte Carlo sampling of SRS, we randomly select $K$ samples (a small number compared to $N$) along with their corresponding predictions''.
Finally, we will add the definition of LowerConfBound as follows: LowerConfBound$(k, n, 1-\alpha)$ returns a one-sided $(1-\alpha)$ lower confidence interval for the Binomial parameter $p$ given that $k\sim\text{Binomial}(n, p)$. In other words, it returns some number $\underline{p}$ for which $\underline{p} \leq p$ with probability at least $1-\alpha$ over the sampling of $k\sim\text{Binomial}(n, p)$.
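For concreteness, such a LowerConfBound can be realized as the exact Clopper-Pearson one-sided bound. The stdlib-only sketch below (our illustration, not necessarily the authors' implementation) solves for the bound by bisection on the binomial tail:

```python
from math import comb

def lower_conf_bound(k, n, confidence):
    """One-sided lower confidence bound on a Binomial parameter p,
    given k successes out of n trials (Clopper-Pearson, via bisection)."""
    if k <= 0:
        return 0.0
    alpha = 1.0 - confidence

    def prob_at_least_k(p):
        # P[X >= k] for X ~ Binomial(n, p); increasing in p.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisect to ~1e-18 precision
        mid = (lo + hi) / 2
        if prob_at_least_k(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo
```

For large $n$ one would typically use a vetted library routine (e.g. a Clopper-Pearson/beta-quantile implementation) rather than summing the tail directly.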
**Weakness 2**: Using a model with Jacobian regularization seems as though it might be more suitable for a certification method. It would be nice to provide more justification as regards to this choice, and how the method would perform on a MDEQ trained without Jacobian regularization.
**Answer**: To address your concern, please see the results provided in our answer to Question 1.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions and for addressing my concerns. I maintain my recommendation to accept the paper.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for your positive feedback. We appreciate your thoughtful review and are glad that our work met your expectations. If you have any additional comments or suggestions, we would be happy to address them. | Summary: This paper develops the first randomized smoothing certified defense for DEQs, termed as Serialized Random Smoothing (SRS). To address the scalability issue of randomized smoothing, To reduce computational redundancy, SRS leverages historical information and a new certified radius estimation. The proposed method can cover various DEQ structures, significantly expanding the scope of existing work. Extensive experiments and ablation studies on large-scale tasks such as ImageNet have been presented to demonstrate the proposed method. Overall, this paper presents a significant contribution to the field of certified robustness for DEQs. The SRS method offers a promising approach to make certification more practical for these models, especially on larger datasets.
Strengths: 1. This paper addresses an important gap in the literature by obtaining non-trivial certified robustness of DEQs across various datasets and network structures.
2. Both the empirical and theoretical developments are solid. The proposed method significantly reduces computation time, making certification feasible for larger models and datasets. The authors have included extensive evaluations on different datasets, model sizes, and hyperparameters.
3. The paper is well written and easy to follow.
Weaknesses: 1. The conceptual novelty may not be that significant since randomized smoothing is a well-known technique in the first place.
2. By the end of the paper, it is unclear whether the results in this paper have achieved SOTA certified robustness among all networks. I mean, does the certified robust accuracy of DEQ reach to the level of existing results on feed-forward networks using standard RS?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Can the authors clarify the difficulty and novelty of implementing their method for large-scale datasets like ImageNet?
2. Does the certified robust accuracy of DEQ reach to the level of existing results on feed-forward networks using standard RS?
3. Can the authors further justify the unique novelty and technicality of their contribution?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors have partially addressed limitations in their paper. However, there seems to lack a substantial discussion on broader societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you so much for your valuable comments and recognition of the topic, effectiveness, and presentation of our work. We are happy to address your concerns and questions with the following results and illustrations.
**Question 1**: Can the authors clarify the difficulty and novelty of implementing their method for large-scale datasets like ImageNet?
**Answer**: For large-scale datasets like ImageNet, standard randomized smoothing can be too expensive to apply to DEQs because (1) DEQs can be slow on such a high-resolution dataset and (2) DEQs need a second-order fixed-point solver to guarantee convergence. With other methods, such as IBP and LBEN, it is hard to compute a non-trivial certified radius because the certification is deterministic [1]. In contrast, our method accelerates standard randomized smoothing by reusing the historical fixed points and guarantees the theoretical correctness of the certification via two-stage hypothesis testing.
We hope this answer addresses your question, and we are happy to provide further details if needed.
[1] Li, Linyi, Tao Xie, and Bo Li. "Sok: Certified robustness for deep neural networks." 2023 IEEE symposium on security and privacy (SP). IEEE, 2023.
**Question 2**: Does the certified robust accuracy of DEQ reach to the level of existing results on feed-forward networks using standard RS?
**Answer**: Thanks for your question. To address your concern, we are glad to provide a comparison between explicit models and DEQs. Although surpassing the performance of explicit neural networks is not our goal, we claim that the performance of DEQs can match theirs, as shown in Table 1. We provide a comparison between DEQs and ResNet-110 under the same training and evaluation setting, and the results are consistent with those reported in [1]. We will add the results to the main paper in a revised version.
| Model\Radius | 0.0 | 0.25 | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 |
|--------------|-----|------|-----|------|-----|------|-----|
| ResNet-110 | 65% | 54% | 41% | 32% | 23% | 15% | 9% |
| MDEQ-30A | **67%** | **55%** | 45% | 33% | 23% | 16% | **12%** |
| SRS-MDEQ-3A | 66% | 54% | **45%** | **33%** | **23%** | **16%** | 11% |
Table 1: Certified accuracy for the MDEQ-LARGE architecture with $\sigma=0.5$ on CIFAR-10. The best certified accuracy for each radius is in bold.
[1] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified adversarial robustness via randomized smoothing." international conference on machine learning. PMLR, 2019.
**Question 3**: Can the authors further justify the unique novelty and technicality of their contribution?
**Answer**: The major contribution of this paper is to explore randomized smoothing certification for implicit models for the first time. Specifically, we discover that existing randomized smoothing techniques are not suitable for certifying implicit models such as DEQs because of the dominating computation cost. Therefore, we propose serialized randomized smoothing that leverages the historical fixed points to effectively reduce the computation redundancy and accelerate randomized smoothing significantly. However, the serialized operation brings correlations between the predictions, breaking the correctness of the existing theorem of randomized smoothing. To solve the challenge, we then propose a two-stage certification technique (hypothesis testing) to provide correct certification. The new theorem and empirical studies verify that our algorithm works as expected.
**Weakness 1**: The conceptual novelty may not be that significant since randomized smoothing is a well-known technique in the first place.
**Answer**: Though randomized smoothing is a well-known technique, it cannot be easily applied to DEQs for computational reasons, as we illustrate in our answer to Question 1. Therefore, we propose our novel serialized randomized smoothing, which is much more efficient and comes with a new theorem, as illustrated in our answer to Question 3. In this way, our method demonstrates novelty. We hope this answer addresses your concern, and we are happy to provide further details if needed.
**Weakness 2**: By the end of the paper, it is unclear whether the results in this paper have achieved SOTA certified robustness among all networks. I mean, does the certified robust accuracy of DEQ reach to the level of existing results on feed-forward networks using standard RS?
**Answer**: To answer your concern, we refer to the comparison provided in Question 2.
**Updating Limitations**: Thanks for pointing out the lack of a substantial discussion on societal impacts. We will add the following discussion in the revised version: ''Our work significantly improves the security of artificial intelligence, which is especially important in sensitive domains. Our proposed SRS provides significant acceleration in defending AI models against attacks, enhancing the applicability of the models and maintaining the integrity of AI-driven decisions.''
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for addressing my comments. I have raised my score to 7.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for reconsidering our submission.
If you have any further concerns, please don't hesitate to let us know. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper introduces a novel method based on random smoothing to improve the certified robustness of Deep Equilibrium Models (DEQs), a promising type of implicit neural network.
Directly applying random smoothing to DEQs incurs high computational costs. To overcome this issue, the authors leverage the properties of DEQs and design a serialized random smoothing (SRS) strategy. This method uses the output from one noisy input as the starting point for the next, significantly accelerating convergence.
Subsequently, to eliminate the dependence between noisy samples created by SRS, the authors introduce Correlation-Eliminated Certification, a new method to obtain certified radius estimation.
Experimental results on CIFAR-10 and ImageNet demonstrate the efficiency of the proposed model.
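The warm-start idea summarized above can be illustrated with a toy contraction map (a hypothetical scalar stand-in for a DEQ layer, not the paper's model):

```python
import math
import random

def solve_fixed_point(f, x, z0, max_iters, tol=1e-8):
    # Naive Picard iteration; real DEQs use Anderson/Broyden solvers.
    z = z0
    for _ in range(max_iters):
        z_next = f(z, x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contractive map whose fixed point varies smoothly with the input x.
f = lambda z, x: 0.5 * math.cos(z) + 0.1 * x

random.seed(0)
z = 0.0  # cold start only for the very first noisy sample
outputs = []
for _ in range(5):
    x_noisy = 1.0 + random.gauss(0.0, 0.25)  # Gaussian-perturbed input
    # Serialized reuse: warm-start the solver from the previous fixed
    # point, so each noisy sample needs only a few iterations.
    z = solve_fixed_point(f, x_noisy, z0=z, max_iters=3)
    outputs.append(z)
```

Because the noisy inputs are close to each other, their fixed points are close too, and the warm start converges in far fewer iterations than a cold start would; this is exactly what makes the predictions correlated and motivates the correlation-eliminated certification.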
Strengths: - The paper is well-written, making it easy to follow the key techniques. The experiments and ablation studies are informative.
- The proposed method presents an inspiring use of the representational properties of DEQs. Specifically, DEQs characterize their output as the fixed point of a function (conditioned on their input). Therefore, when running inference on a group of similar samples, DEQs are essentially solving a group of similar fixed-point equations; their inference can be accelerated by reusing the fixed-point between samples.
- The designed certified radius estimation algorithm, Correlation-Eliminated Certification, is straightforward but effective. With a two-stage hypothesis test, this method eliminates the dependency (a side effect of SRS).
Weaknesses: The proposed method is only applicable to DEQs. Although DEQs are a promising architecture, they have not been as widely applied as explicit neural networks or proven to scale well, limiting the scope of this paper. Moreover, the paper does not include a performance comparison of the proposed method against other non-DEQ models.
Technical Quality: 3
Clarity: 4
Questions for Authors: - On line 52, "Due to the conservative certification, IBP and LBEN cannot be generalized to large-scale datasets (e.g., ImageNet)." There seems to be a logical gap in this sentence. Could the authors elaborate more on this point? Is there any reference that can back up this claim?
- Is the certified robustness of the proposed model (SRS-DEQ) competitive against the certified robustness of other non-DEQ architectures with random smoothing? For example, the paper "CERTIFIED ROBUSTNESS FOR DEEP EQUILIBRIUM MODELS VIA INTERVAL BOUND PROPAGATION" compares their models with other explicit neural networks. Such comparisons are important for the audience to position the proposed model in a larger context.
- Are there any insights or techniques from this paper that can be applied to improve other performance aspects of DEQs (besides certified robustness) or improve non-DEQ models?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you so much for your valuable comments and recognition of the novelty, effectiveness, and presentation of our work. We are happy to address your concerns and questions with the following results and illustrations.
**Question 1**: On line 52, "Due to the conservative certification, IBP and LBEN cannot be generalized to large-scale datasets (e.g., ImageNet)." There seems to be a logical gap in this sentence. Could the authors elaborate more on this point? Is there any reference that can back up this claim?
**Answer**: Thanks for your question. The sentence tries to convey that deterministic methods generate a trivial certified radius (namely, close to 0) because their estimation is not tight enough in some cases, such as deep networks [1]. By large-scale datasets, we refer to high-resolution, many-class cases such as ImageNet, which increase the difficulty of the task, usually require more complex models, and thereby exacerbate the trivial-certified-radius problem. In Section E of [2] (the first paragraph and the practical implications part), the authors make similar claims. We will clarify the logic and cite the above reference to support our claim in a revised version.
[1] Zhang, Bohang, et al. "Towards certifying l-infinity robustness using neural networks with l-inf-dist neurons." International Conference on Machine Learning. PMLR, 2021.
[2] Li, Linyi, Tao Xie, and Bo Li. "Sok: Certified robustness for deep neural networks." 2023 IEEE symposium on security and privacy (SP). IEEE, 2023.
**Question 2**: Is the certified robustness of the proposed model (SRS-DEQ) competitive against the certified robustness of other non-DEQ architectures with random smoothing? For example, the paper "CERTIFIED ROBUSTNESS FOR DEEP EQUILIBRIUM MODELS VIA INTERVAL BOUND PROPAGATION" compares their models with other explicit neural networks. Such comparisons are important for the audience to position the proposed model in a larger context.
**Answer**: We appreciate that you recognize the value of DEQs, which present advantages over explicit neural networks in some cases, such as memory efficiency during training and the accuracy-speed trade-off during inference [1]. Following your suggestion, we are glad to provide a comparison between explicit models and DEQs. Although surpassing the performance of explicit neural networks is not our goal, we claim that the performance of DEQs can match theirs, as shown in Table 1. We provide the comparison between DEQs and ResNet-110 under the same training and evaluation setting, and the results are consistent with those reported in [2]. We will add the results to the main paper in a revised version.
| Model\Radius | 0.0 | 0.25 | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 |
|--------------|-----|------|-----|------|-----|------|-----|
| ResNet-110 | 65% | 54% | 41% | 32% | 23% | 15% | 9% |
| MDEQ-30A | **67%** | **55%** | 45% | 33% | 23% | 16% | **12%** |
| SRS-MDEQ-3A | 66% | 54% | **45%** | **33%** | **23%** | **16%** | 11% |
Table 1: Certified accuracy for the MDEQ-LARGE architecture with $\sigma=0.5$ on CIFAR-10. The best certified accuracy for each radius is in bold.
[1] Bai, Shaojie, J. Zico Kolter, and Vladlen Koltun. "Deep equilibrium models." Advances in neural information processing systems 32 (2019).
[2] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified adversarial robustness via randomized smoothing." international conference on machine learning. PMLR, 2019.
**Question 3**: Are there any insights or techniques from this paper that can be applied to improve other performance aspects of DEQs (besides certified robustness) or improve non-DEQ models?
**Answer**: Yes. The insights and techniques for exploiting computation redundancy can be applied to improve other performance aspects. For instance, uncertainty estimation with Bayesian inference requires multiple samples of the model parameters, resulting in inference with similar model parameters (instead of similar data inputs as in this paper). When applying Bayesian neural networks to DEQs, our techniques can improve the efficiency of the uncertainty estimation. However, we must admit that our current fixed-point reuse technique may not be directly applicable to non-DEQ models, and new techniques to exploit computation redundancy would need to be developed. This is one direction of our future work.
**Weakness 1**: The proposed method is only applicable to DEQs. Although DEQs are a promising architecture, they have not been as widely applied as explicit neural networks or proven to scale well, limiting the scope of this paper. Moreover, the paper does not include a performance comparison of the proposed against other non-DEQ models.
**Answer**: We appreciate that you recognize the value of DEQs again. The corresponding results are provided in the reply to Question 2.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing all my questions with experiments and clarification. I have raised my score accordingly.
I found your response to Question 3 especially inspiring. I think that adding a paragraph in the related work section discussing the “DEQ fixed-point reuse” technique would broaden the appeal of this paper, and effectively position the paper's core technique within its context. For example, besides the Bayesian inference example that you gave, [1] exploited fixed-point reuse in the diffusion process, and [2] applied the technique to optical flow estimation.
[1] Bai, Xingjian, and Luke Melas-Kyriazi. "Fixed Point Diffusion Models." CVPR, 2024.
[2] Bai, Shaojie, Zhengyang Geng, Yash Savani, and J. Zico Kolter. "Deep Equilibrium Optical Flow Estimation." CVPR, 2024.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for reconsidering our submission. We will promptly incorporate the related works you mentioned.
If you have any further concerns, please don't hesitate to let us know. | null | null | null | null | null | null |
Provable Acceleration of Nesterov's Accelerated Gradient for Asymmetric Matrix Factorization and Linear Neural Networks | Accept (poster) | Summary: In this study, the authors established the convergence rates of gradient descent and Nesterov's accelerated gradient (NAG) for asymmetric matrix factorization and the 2-layer linear network. The authors proved that an unbalanced initialization can lead to linear convergence of both methods, and that Nesterov's acceleration results in a faster convergence rate. Numerical experiments are implemented to support the theory.
Strengths: This paper is clearly written and easy to follow in general. The results are novel and inspiring to audiences in the non-convex optimization field. I did not check the proofs in the appendix due to time constraints, but the part in the main manuscript looks correct to me.
Weaknesses: I do not see major weaknesses. The authors could consider discussing more related literature and include more intermediate steps in the main manuscript. Also, I think the authors can consider illustrating the effect of an unbalanced initialization in numerical experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I think the following two papers also discussed the asymmetric low-rank matrix optimization problem. It would be better if the authors could discuss them and the references therein:
[1] Zhang, H., Bi, Y., & Lavaei, J. (2021). General low-rank matrix optimization: Geometric analysis and sharper bounds. Advances in Neural Information Processing Systems, 34, 27369-27380.
[2] Bi, Y., Zhang, H., & Lavaei, J. (2022, June). Local and global linear convergence of general low-rank matrix recovery problems. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 9, pp. 10129-10137).
- Following the above comment, I wonder if the authors can briefly discuss the potential and challenges of extending the results to more general low-rank matrix optimization problems.
- Lines 79-80: I am not sure why the method in [Stöger and Soltanolkotabi, 2021] is considered a preconditioned method.
- Theorem 1: In my understanding, the value $\|R_t\|_F$ should be proportional to the scale of $A$. However, in the bound of $\|R_t\|_F$, the right hand-side grows with the size of $A$. Maybe there is a typo?
- The above comment also applies to Theorem 2.
- It would be better to mention in Theorems 1-2 that the bound $T$ is derived using the bound in Proposition 1?
- Line 182: I think the shrinkage rate $\theta \in (0, \rho]$?
- I wonder if the authors can compare the performance of GD/NAG with different values of c?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See my comments in the Questions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback and constructive comments. Our responses to each of the comments are listed below.
> Q1: I think the following two papers also discussed the asymmetric low-rank matrix optimization problem.
We thank the reviewer for pointing out these related works, and we will cite [1] and [2] and discuss them in the next version.
> Q2: I wonder if the authors can briefly discuss the potential and challenges of extending the results to more general low-rank matrix optimization problems.
Generalizing the results to the general low-rank matrix optimization problem $\min_{X,Y}f(XY^\top)$ is challenging. One of the challenges is the possible non-linearity of the gradient w.r.t. $X$. In our proof, a key step is to show the residual and the error terms in all iterations are in the contraction subspace of the dynamics, which requires the gradient of $f$ to preserve the column space of $X$. Unfortunately, such a property does not hold for general loss functions.
Nevertheless, generalization to neural networks with non-linear activations, $\min_{X,Y}\|L-X\sigma(Y^\top D)\|_F^2$, will not have this issue, and we believe it is an interesting topic that is worth future investigation.
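For the quadratic loss, the column-space property referred to in this answer can be verified directly (a brief sketch in our notation, not taken verbatim from the paper):

```latex
\nabla_X \tfrac{1}{2}\|A - XY^\top\|_F^2 = -(A - XY^\top)Y
\quad\Longrightarrow\quad
X_{t+1} = X_t + \eta\,(A - X_t Y_t^\top)Y_t ,
```

so $\mathrm{col}(X_{t+1}) \subseteq \mathrm{col}(A) + \mathrm{col}(X_t)$; with $X_0 = cA\Phi$ we have $\mathrm{col}(X_0) \subseteq \mathrm{col}(A)$, hence $\mathrm{col}(X_t) \subseteq \mathrm{col}(A)$ for all $t$ by induction. For a general loss $f$, the gradient $\nabla f(XY^\top)$ need not map into $\mathrm{col}(A)$, which is exactly the obstruction for generalizing the analysis.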
> Q3: Lines 79-80: I am not sure why the method in [Stöger and Soltanolkotabi, 2021] is considered a preconditioned method.
Thank you for pointing out this misplacement. The method in [Stöger and Soltanolkotabi, 2021] is not a preconditioned one. We cite it to support the former claim " Overparameterization may heavily slow down convergence", as it shows $O(\kappa^2\log\frac{1}{\epsilon})$ for exact parameterization case and $O(\kappa^6\log\frac{\kappa}{\epsilon})$ for overparameterization case.
To avoid confusion, we will move it before the comma in the next version.
> Q4 & 5: Theorem 1 & 2: In my understanding, the value $\|R_t\|_F$ should be proportional to the scale of $A$. However, in the bound of $\|R_t\|_F$, the right-hand side grows with the size of $A$. Maybe there is a typo?
There is no typo here. The bound on $\|R_t\|_F$ implicitly depends on $\|A\|_F$ through the choice of $c$. The Theorems require $c^2\geq\underline{c}^2=O(\|A\|_F)$, and the RHS grows with $c^2$, so $\|R_t\|_F$ implicitly depends on $\|A\|_F$. When $c$ is larger than $\underline{c}$, $\|A\|_F$ is dominated by $c^2$ and our results still hold. When $c$ is too small, our proof does not work.
Moreover, the RHS does not explicitly depend on the size of $A$. The size might affect the bound through $\|A\|_F$ and $\sigma_i(A)$, but there is no explicit dependence on the dimensions $m$ and $n$. The bound only depends on the rank $r$ and the overparameterization $d$, neither of which is directly related to the size of $A$.
> Q6: It would be better to mention in Theorems 1-2 that the bound $T$ is derived using the bound in Proposition 1
Thank you for the suggestion. We will mention the use of quantitative results in Proposition 1 in the next version.
> Q7: Line 182: I think the shrinkage rate $\theta\in(0,\rho]$?
It is not a typo: the error shrinkage rate $\theta$ is larger than the "ideal shrinkage rate" $\rho$ given by the condition number of the linear part of the dynamics. Intuitively, the existence of nonlinear error terms slows down convergence, so $\|R_t\|_F$ will shrink at a rate $\theta$ slower than $\rho$, namely $\theta>\rho$. Then by Lemma 3, all error terms can be controlled by sequences with shrinkage rate $\theta$. The auxiliary Lemma 6 provided in Appendix B.1 then guarantees that the final residual $\|R_t\|_F$ is controlled by $O(\theta^t)$.
> Q8: I wonder if the authors can compare the performance of GD/NAG with different values of c.
We add additional experiments on GD and NAG with different values of $c$. The results are provided in Figure 5 in the PDF file in "Author Rebuttal". As illustrated, when $c$ is sufficiently large, increasing $c$ further has little effect on the convergence rate, which is consistent with our theory.
[1] Zhang, H., Bi, Y., \& Lavaei, J. (2021). General low-rank matrix optimization: Geometric analysis and sharper bounds. Advances in Neural Information Processing Systems, 34, 27369-27380.
[2] Bi, Y., Zhang, H., \& Lavaei, J. (2022, June). Local and global linear convergence of general low-rank matrix recovery problems. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 9, pp. 10129-10137).
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you for your time in reviewing our manuscript and offering valuable comments and suggestions. We've addressed the questions raised in your review with our author rebuttals. We'd like to know if these responses have sufficiently addressed your concerns. If any points require further clarification or discussion, please let us know.
Thank you again for your effort.
Best wishes,
Authors | Summary: This paper considers the convergence of first-order optimization methods, including gradient descent and Nesterov's accelerated gradient,
for matrix factorization and the linear neural network.
In Section 2.1, they analyze the gradient descent algorithm on the matrix factorization (c.f. Thm. 1).
In Section 2.2, they analyze the NAG (c.f. Thm 2). Section 3 illustrates the proof strategy.
Section 4 extends the analysis to the linear neural network (c.f. Thm. 3) and Section 5 presents the numerical experiments.
Strengths: This paper is generally well-written and is quite easy to follow. Also, the analysis seems to be valid.
Weaknesses: The major concern is the potential impact of this paper. Generally speaking, this paper studies a well-studied problem with quite standard techniques.
Also, there are some small typos that are quite annoying. For example, the $\|A\|_F$ in Thm. 1 and Thm. 2 can cancel out. In Thm. 2 (line 14), the GD should be NAG. Please do another round of proof-reading.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Why the convergence rate is independent of $\|A\|_F$?
2. Can you explain line 807? Why can the eigenvalues $\lambda_i(T_{GD})$ ($1\leq i \leq (m-r)n$) be ignored? The previous line suggests these eigenvalues are one.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time in reviewing the paper and helping us improve.
Our responses to each point of the Weaknesses and Questions are listed below.
> W1: The major concern is the potential impact of this paper. Generally speaking, this paper studies a well-studied problem with quite standard technique.
Impact is a little subjective, but we hope the reviewer could consider the following regarding whether this is a well-studied problem and whether our technique is standard:
Quantitative theoretical guarantees for the optimization of general nonconvex functions without Lipschitz gradients, even after decades of remarkable progress, are still largely an open problem. We managed to obtain them for a specific type of problem, namely matrix factorization. It is still a nonconvex problem without Lipschitz gradient. Moreover, it carries a lot of insight for a better understanding of deep learning, as it is closely related to linear neural networks, which we also analyzed. Therefore, it is extensively studied, yet *not well-studied*, because many understandings are still lacking.
To illustrate this more precisely, consider [1], which is very recent progress (NeurIPS'23): the motivation was to understand Gradient Descent (not gradient flow, which was easier) for this problem, but due to technical difficulties this was not accomplished; instead, the authors managed to work out a variant, namely Alternating Gradient Descent, and obtained a convergence rate. Now, in our work, we not only work out Gradient Descent but also its momentum version, which is more complicated to analyze but beneficial, because we quantitatively prove that Nesterov's momentum indeed accelerates convergence. This requires *new analysis techniques* (as discussed in Remark 1) and rather precise (if not tight) error bounds, and we're glad it worked out.
[1] Ward, Rachel, and Tamara Kolda. "Convergence of alternating gradient descent for matrix factorization." Advances in Neural Information Processing Systems 36 (2023): 22369-22382.
> W2: There are some small typos that are quite annoying.
Thank you very much for helping us catch them.
Although we intentionally put one $\|A\|_F$ in the denominator of the prefactor so the other $\|A\|_F$ can align with the definition of relative error (lines 124 and 140), the expression indeed becomes more confusing.
We will cancel out $\|A\|_F$ in Theorems 1 and 2 and $\|LD^\top\|$ in Theorem 3 to avoid confusion.
The GD on line 140 is indeed a typo. We will proofread and fix it along with other typos in the next version.
> Q1: Why the convergence rate is independent of $|A|_F$?
While the convergence rate does not explicitly depend on $\|A\|_F$, $\|A\|_F$ will still affect the convergence rate through $c$ defined in Eq. (2). Theorems 1 and 2 require $c^2\geq\underline{c}^2=O(\|A\|_F)$, and the results show $\|R_t\|_F$ depends on $c^2$. When we choose $c=\underline{c}$, the bound will have an explicit dependence on $\|A\|_F$. When we choose $c>\underline{c}$, $\|A\|_F$ is dominated by the factor $c^2$, hence it is implicitly contained in our results in lines 123-124 and 139-140.
Moreover, when considering the iteration complexity, we adopt relative error as the metric, i.e., $\|R_t\|_F/\|A\|_F\leq\epsilon$. Therefore, the right-most $\|A\|_F$ in the line between 123-124 (and the line between 139-140) does not appear in the iteration complexity.
> Q2: Can you explain the line 807? Why the eigenvalues of $\lambda_i (T_{GD}) (1\leq i\leq (m-r)n)$ can be ignored?
Thanks for the opportunity to correct a critical misunderstanding. It is *not* because eigenvalues are ignored, but because $\left<v,v_i\right>=0$. By Lemma 1 (line 188), $\mathcal{H}$ is the eigen subspace corresponding to positive eigenvalues of $H_0$, which is orthogonal to the kernel subspace of $H_0$. Through the derivation from line 804 to 807, we know that $\{v_1,\dots,v_{(m-r)n}\}$ exactly spans the kernel subspace of $H_0$ (where $\lambda_{mn-i}(H_0)=0$). Given the condition that $v\in\mathcal{H}$, we get $\left<v,v_i\right>=0$ for $i=1,\dots,(m-r)n$, hence the first $(m-r)n$ terms vanish.
Thanks to the reviewer we realize this was under-explained, and will add an explanation about this in line 808 in the next version.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you for your time in reviewing our manuscript and offering valuable comments. We've addressed the weaknesses and questions raised in your review with our author rebuttals. We'd like to know if these responses have sufficiently addressed your concerns. If any points require further clarification or discussion, please let us know.
Thank you again for your effort.
Best wishes,
Authors
---
Rebuttal 3:
Title: Thank you for the comment
Comment: Thank you for the response. I will stick to my original score.
I appreciate that the authors agree with my suggestions on the presentation improvement and promise to make the revisions accordingly. | Summary: The paper calculates convergence rates of the gradient descent and Nesterov's accelerated gradient descent algorithms for factorization of rectangular matrices, a nonconvex optimization problem. Their analysis is for algorithms where the factor matrices are initialized as follows: one matrix is initialized as the original matrix multiplied with a Gaussian random matrix with appropriate scaling, and the other is a zero matrix. They show convergence rates with quadratic dependence on the condition number of the data matrix for GD and linear dependence for NAG. They also extend their analysis to linear neural networks and back their theoretical findings with empirical results.
Strengths: 1) Gradient descent algorithm and its variants are used almost everywhere in machine learning. Deep neural networks have made nonconvex optimization also very frequent in ML and matrix factorization is a basic problem. As such a work providing theoretical guarantees for the problem is very relevant.
2) Although the paper is difficult to read, it is structured in a very nice manner so that a reader can follow the high-level ideas quite clearly. The related works and the placement of this work among existing works is discussed very clearly.
3) I could not check the proofs in detail, but enough intuition, proof sketch and moderate level of details are provided in the main paper itself. The ideas appear solid and sound.
Weaknesses: 1) The experimentation is done on very small matrices. In practice much larger rectangular matrices are often encountered. It would give more insights if the experiments were also performed with moderate and large size matrices.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback and constructive comments.
> W1: The experimentation is done on very small matrices. In practice much larger rectangular matrices are often encountered. It would give more insights if the experiments were also performed with moderate and large size matrices.
Thank you for your suggestion. We add additional experiments on large-scale matrices. In particular, we set $(m,n)=(1200,1000)$ for matrix factorization and $(m,n,N)=(500,400,600)$ for linear neural networks in the additional experiment. We compare the performances of GD and NAG while keeping other settings the same as in Figure 2 in the paper. The results are plotted in Figure 4 in the PDF file attached to "Author Rebuttal". As illustrated in Figure 4, our conclusion in the paper that (1) NAG performs better than GD and (2) overparameterization accelerates convergence remains valid for large matrices.
---
Rebuttal Comment 1.1:
Comment: I have read the other reviews and the rebuttal. I will keep my score. | Summary: In this paper the authors analyze the convergence of the Nesterov Accelerated Gradient algorithm for a) rectangular matrix factorisation and b) linear neural networks. By using imbalanced initialisation, the authors obtain linear rates of convergence, improving upon the state of the art regarding the dependence of the rates on condition numbers of the sought matrices.
Strengths: - The paper is well written and easy-to-follow.
- The analysis of convergence of NAG on rectangular matrix factorisation and linear neural networks is an interesting topic of research.
- The authors improved upon state-of-the-art results using novel technical approaches and elegant proof techniques.
Weaknesses: - The authors didn't mention relevant works studying imbalance effect on the similar problems e.g. [1].
- The results address the linear neural network case but it's not obvious how the analysis could be extended to account for non-linearities or non-smooth activation functions.
[1] Min, Hancheng, et al. "On the explicit role of initialization on the convergence and implicit bias of overparametrized linear networks." International Conference on Machine Learning. PMLR, 2021.
Technical Quality: 4
Clarity: 3
Questions for Authors: - In [1] the authors provided rates of convergence of gradient flows in the imbalanced initialisation regime. Could these results provide some insights on how to generalise the derived rate for different amounts of imbalance?
- Could the authors provide some insights on whether the theoretical results on linear neural network could be extended to non-linear and possibly non-smooth activation functions?
Minor:
Eq. under line 132: Should $x_t$ be replaced by $z_t$?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback and constructive comments. Our responses to each of the comments are listed below.
> W1 \& Q1: The authors didn't mention relevant works studying the imbalance effect on similar problems, e.g. [1]. In [1] the authors provided rates of convergence of gradient flows in the imbalanced initialisation regime. Could these results provide some insights on how to generalise the derived rate for different amounts of imbalance?
We thank the reviewer for pointing out the related work, and we will cite [1] and discuss it in the next version.
In particular, our settings and proof techniques differ from [1]. Consequently, their results cannot directly translate into our case.
1. We consider GD and NAG with step size $O(1/L)$, while [1] considers GF with infinitesimal step size. Without carefully characterizing discretization error, the result in [1] for GF cannot be applied to GD.
2. In our proof, the imbalance initialization guarantees the induction steps to be valid. The amount of imbalance will affect the constant factors but will not affect the convergence rate $(1-\frac{\mu}{L})^t$ (or $(1-\sqrt{\frac{\mu}{L}})^t$). To be more explicit, suppose we initialize $X_0=c_1A\Phi_1\in\mathbb{R}^{m\times d}$, $Y_0=c_2\Phi_2\in\mathbb{R}^{n\times d}$; then by replacing $H_0$ in Proposition 2 with $H_0^\prime=I\otimes (X_0X_0^\top)$, we can generalize the proof in lines 814-815 and 821-832. The induction requires a sufficiently large $c_1$ and a relatively small $c_2\leq O(c_1)$. Meanwhile, by Proposition 4.3.2 in [2], we have $$\frac{1}{1-O(c_1c_2)}\|A_0\|_F\leq\|R_0\|_F\leq (1+O(c_1c_2))\|A_0\|_F$$ with high probability. Therefore, when $c_1$ is fixed, a smaller $c_2$ yields a smaller initial loss, resulting in a smaller constant factor. However, the convergence rate remains the same, as it depends on the extreme non-zero eigenvalues of $H_0^\prime$, i.e., $\mu$ and $L$, which relate only to $c_1$ and not $c_2$. We conduct additional experiments on GD/NAG with different values of $c_2$, and the numerical results support our claim. Please find the experiment details and results in "Author Rebuttal" and Figure 6 in the PDF file.
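To make the role of the unbalanced initialization concrete, here is a toy numpy sketch of plain GD on $\frac12\|A-XY^\top\|_F^2$ with the initialization style discussed above, in the $c_2=0$ case ($Y_0=0$). All sizes, the value of $c$, the step size, and the iteration count are illustrative choices of ours, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, d = 30, 20, 5, 10                       # illustrative sizes only

# Rank-r target with a mild condition number for a fast toy run.
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
A = U @ np.diag(np.linspace(1.0, 3.0, r)) @ V.T

c = 5.0
Phi = rng.standard_normal((n, d)) / np.sqrt(d)
X = c * A @ Phi                                  # unbalanced init: X0 = c * A * Phi
Y = np.zeros((n, d))                             # ... and Y0 = 0

eta = 0.5 / np.linalg.norm(X, 2) ** 2            # crude O(1/L)-style step size guess
for _ in range(20000):
    R = A - X @ Y.T                              # residual
    X, Y = X + eta * R @ Y, Y + eta * R.T @ X    # simultaneous GD step

rel_err = np.linalg.norm(A - X @ Y.T, "fro") / np.linalg.norm(A, "fro")
```

In this toy run the relative residual shrinks geometrically while a larger initial scale $c$ mainly changes constant factors, consistent with the claim above that the rate is governed by the nonzero spectrum of $X_0X_0^\top$.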
> W2\& Q2: The results address the linear neural network case but it's not obvious how the analysis could be extended to account for non-linearities or non-smooth activation functions. Could the authors provide some insights on whether the theoretical results on linear neural network could be extended to non-linear and possibly non-smooth activation functions?
Extending our results to neural networks with non-linear (and possibly non-smooth) activations is an interesting topic that we want to investigate in the future.
While non-linearity complicates the analysis, we believe some of our techniques still apply.
Suppose we apply a non-linear activation $\sigma(\cdot)$ and the problem becomes $\min_{X,Y}\|L-X\sigma(Y^\top D)\|_F^2$. In our analysis, one of the key steps is to show that the residual and error terms are in the contraction subspace of the dynamics.
Since the second layer is linear, by sketching initialization $X_0=cL\Phi$ we can still get $X_0$ that shares the same column space with $L$. By checking the dynamics of GD/NAG, we can verify that this space contains the column space of residuals and errors of all later iterations.
However, due to the non-linear activation, the analysis of the linear part of the system and error terms becomes complicated.
It requires further effort to verify whether our results can successfully generalize to the non-linear activation setting.
> Q3: Eq. under line 132: Should $x_t$ be replaced by $z_t$?
Thank you for pointing this out. This is indeed a typo and we will replace $x_t$ with $z_t$ in the next version.
[1] Min, Hancheng, et al. "On the explicit role of initialization on the convergence and implicit bias of overparametrized linear networks." International Conference on Machine Learning. PMLR, 2021.
[2] Ward, Rachel, and Tamara Kolda. "Convergence of alternating gradient descent for matrix factorization." Advances in Neural Information Processing Systems 36 (2023): 22369-22382.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal to my comments and for the additional experiments you conducted to explore how imbalanced initialization affects the rate of convergence. I will keep my score as it is. | Rebuttal 1:
Rebuttal: We thank all the reviewers for dedicating their time to reviewing our paper and providing valuable feedback.
In response to the comments involving the size of the matrices (by reviewer t3Co), the performance of GD/NAG with different values of $c$ (by reviewer u7PK), and the amount of imbalance (by reviewer SNTq), we conduct additional numerical experiments and show the results (Figures 4 to 6) in the attached PDF file. We discuss each of them below:
> W1 by t3Co: The size of the matrices is small.
Our original submission uses $(m,n)=(100,80)$ for matrix factorization and $(m,n,N)=(100,80,120)$ for linear neural networks.
In this rebuttal where we investigate larger-sized problems, we set $(m,n)=(1200,1000)$ for matrix factorization and $(m,n,N)=(500,400,600)$ for linear neural networks. We compare the performances of GD and NAG under the same setting as in Figure 2 with moderate/large matrices in these sizes. The results are provided in Figure 4. As illustrated, the conclusion that (1) NAG performs better than GD and (2) overparameterization accelerates convergence remains valid for large matrices.
> Q8 by u7PK: The performance of GD/NAG with different values of $c$.
We conduct additional experiments on GD and NAG with different values of $c$ and plot the results in Figure 5.
As illustrated, when $c$ is sufficiently large, increasing $c$ further has little effect on the convergence rate, which is consistent with our theory.
> Q1 by SNTq: The effect of the amount of imbalance at initialization.
We conduct additional experiments on GD and NAG with initialization $X_0=c_1A\Phi_1\in\mathbb{R}^{m\times d}$, $Y_0=c_2\Phi_2\in\mathbb{R}^{n\times d}$, where $[\Phi_1]_{i,j}\sim N(0,1/d)$ and $[\Phi_2]_{i,j}\sim N(0,1/n)$. We keep $c_1=50$ and set different values of $c_2$. The results are plotted in Figure 6. As illustrated, changing $c_2$ within a range will not significantly affect the convergence rate (slope), but will change the initial loss (intercept). This result supports our claim in the individual response to reviewer SNTq.
In the next version, we will add these results and discussions in an "Additional Experiments" section in the appendix.
For responses to other comments, please refer to our responses to reviewers.
Pdf: /pdf/f06352f9c39d039eb85ec0036c88c06fe9a731b7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MUVERA: Multi-Vector Retrieval via Fixed Dimensional Encoding | Accept (poster) | Summary: The paper studies how to optimize multi-vector retrieval under the MaxSim similarity function (commonly used in NLP literatures like ColBERT). This is done by mapping queries and item embeddings asymmetrically to fixed dimensional embeddings (FDEs), such that in the mapped embedding space, the inner product similarity is a good approximation of the original MaxSim similarity. This approach then enables classical MIPS techniques to be applied to optimize for MaxSim similarity search. The authors compare FDEs with PLAID / SV heuristics, and found it to outperform those optimized similarity search baselines.
Strengths: **S1**: I really like the MIPS approximation construction for MaxSim in Section 2. While the techniques used are not that novel (e.g., Section 2 heavily reminds me of the classical AND- and OR- constructions on top of LSH etc.), being able to reduce MaxSim to MIPS is likely of significant interest to many researchers in NLP and vector search community.
**S2**: Reducing MaxSim to MIPS enables many classical optimizations for MIPS to be applied incl PQ, RP (line 177-180) etc, likely significantly improving the usability and applicability of ColBERT-style approaches.
**S3**: MaxSim similarity (and related setting under late-interaction settings) is an important topic in NLP, as inner products have been proven to be suboptimal for similarity search, especially when we are interested in Recall@K for small Ks (e.g., 10s-50s).
Weaknesses: **W1**: With FDE, the effectiveness of the MaxSim approach (used in ColBERT and other work) is degraded dramatically, to the point that it underperforms dense retrieval baselines. E.g., Figure 3 (LHS) suggests that Recall@100 of FDE is quite bad vs. exact MaxSim in ColBERT (up to ~.84, vs ~.92). The main issue here is that the gains of non-MIPS similarity approaches (like MaxSim and, more recently, DSI/NCI etc.) are primarily at smaller @Ks (e.g., ColBERTv2's quality gains are primarily demonstrated on Metric@10 in their original paper). The current paper only reported Recall@K={100, 1K, 10K}, which is not aligned with the intended use cases for ColBERT (as it doesn't provide much gain over dense baselines at these Ks).
- Even at 100, the 1-.84/.92 = 0.087 degradation in recall will likely make MaxSim+FDE underperform classical dense retrieval baselines. E.g., on MS MARCO, DPR achieves 82.2 R@50 whereas ColBERT achieves 82.9 R@50 per [1] (and the gap @100 will be smaller). So adjusting for FDE's quality degradation, we get 82.2 for dual-encoder baselines but just 82.9 * .84/.92 = 75.69 for MaxSim, and people will ask - why not use classical dense encoders like DPR to begin with?
- Additionally, to achieve the ~.84 recall, FDE encodings already used 20K floats vs. 10,087 in MS MARCO multi-vector baselines per Fig. 3...
**W2**: Random projection techniques (incl. their multi-hash variants, or $R_{reps}$ in this paper), and product quantization techniques used in FDEs are commonly used in MIPS systems, and generally universally reduce memory access costs and improve throughput/latency, etc. of nearest neighbor methods (after hparam tuning). Have we evaluated applying random projection and PQ techniques directly to baselines like ColBERTv2, PLAID, etc.? How much of the throughput gains presented in the paper are due to RPs/PQs?
**W3**: Writing could be improved.
- It would be good to add a table of notations, at least to appendix to help with references. Even though I work on vector search (incl multi-vector search and variants), I ended up spending quite some time figuring out what the authors meant by various notations in Section 2.
- Line 141-142: "To resolve this, we set ~p(k) to be the centroid of the p ∈ P’s with ϕ(p) = ϕ(q)". - This sentence is confusing, as the FDE mapping for a document (p) should not be query (q) aware. Please consider rewriting it to reflect the actual mapping done in Equation (3).
- Section 3: consider removing codebase/dataset licenses (CC BY-SA MIT etc.) as most readers may not find them useful.
- Line 311 - "We run latency experiments using a single thread, and run our QPS experiments on all 176 threads." Could you explain why latency experiments are run with a single thread? As using additional threads should further improve latency.
- Typos: Line 132-133 are more are more likely to land in -> are more likely to land in
**References**:
- [1] Ren et al. PAIR: Leveraging Passage-Centric Similarity Relation for Improving Dense Passage Retrieval. ACL 2021
Technical Quality: 2
Clarity: 3
Questions for Authors: **Q1**: Would it be possible to report effectiveness of FDE + MaxSim vs corresponding baselines (incl dense retrieval and ColBERTv2 etc.), esp. for common Ks (10, 50, 100)?
**Q2**: Can we equally apply techniques like PQ and random projection to baselines (again ideally dense retrieval with a comparable embedding dimensionality to normalize I/O costs), and evaluate how much throughput gains are due to PQs/RPs vs the particular construction of FDEs proposed in this paper?
**Q3**: FDE's approximation quality seems to heavily rely on how the B clusters are defined. If B is large: a) the overall dimensionality is large, requiring significantly more memory accesses, and b) Max(P, Q) may not map to anything. OTOH if B is small, many false positives collisions may occur, resulting in (potentially significant) overestimation of the MaxSim similarity function. Is there a reliable way of grid searching B (other than in 2^k as done in Sec 3.1)? Could we discuss this somewhere in the paper?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and suggestions. We reply to the main questions and concerns below.
> W1:
We first note that the FDE technique, like the SV Heuristic, is an approach for multi-vector retrieval that must be followed by a re-ranking step with the exact Chamfer similarity. Therefore, the raw recall of the FDE method should not be compared to the brute-force ColBERTv2 recall (or to other dense retrieval baselines), but rather to other alternative methods for MV over-retrieval. The primary such alternative (used in PLAID and others) is the SV Heuristic, and we show that FDEs are significantly more efficient than this method by a factor of 2.6-5x. The message of Figure 3 is that by using a (e.g.) 2.5k-dimensional FDE, retrieving 1000 candidates and reranking, one can obtain near-optimal recall@100 with respect to the brute-force Chamfer (see Figure 3, middle, at the point where the FDE Pareto curve passes the red "Exact Chamfer R@100" line). In contrast, the SV heuristic needs to retrieve many more (2k-5k) candidates to achieve the same recall after reranking.
> “MaxSim+FDE underperform dense retrieval baselines.”
We emphasize again that our model is *not* returning the 100 candidates retrieved by the FDE. Instead, it would retrieve more candidates, say 500 or 1000, rerank them with Chamfer, and output the top-100 after reranking. This gives significantly better R@100 than 0.84: for instance, in our end-to-end evaluation in Figure 7, we achieve 0.902 R@100 and 0.971 R@1000 on MS Marco, both of which are within 1-2% of the brute-force ColBERTv2 recall. Note that this matches the recall obtained by PLAID in those experiments, and that PLAID (which uses the SV heuristic for initial retrieval) must also perform the same over-retrieval and reranking steps.
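The two-stage pipeline described here can be sketched as follows. This is a toy illustration with our own function and variable names, not the paper's implementation: over-retrieve with a cheap single-vector FDE dot product, then rerank the candidates with exact Chamfer (MaxSim).

```python
import numpy as np

def retrieve_then_rerank(q_fde, doc_fdes, Q, docs, k_retrieve=1000, k_final=100):
    """Hypothetical sketch of the two-stage pipeline described above:
    (1) over-retrieve k_retrieve candidates by single-vector FDE dot product,
    (2) rerank them with exact Chamfer similarity, (3) return the top k_final.
    q_fde: (D,) query FDE; doc_fdes: (n, D) document FDEs;
    Q: (m, d) query token vectors; docs: list of (n_i, d) token matrices."""
    # Stage 1: cheap single-vector scoring with FDEs
    cand = np.argsort(doc_fdes @ q_fde)[::-1][:k_retrieve]
    # Stage 2: exact Chamfer (MaxSim) reranking on the candidates only
    chamfer = [(Q @ docs[i].T).max(axis=1).sum() for i in cand]
    order = np.argsort(chamfer)[::-1][:k_final]
    return cand[order]
```

Because stage 2 touches only the over-retrieved candidates, reranking 500-1000 documents is cheap relative to brute-force Chamfer over the whole corpus.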
> “Additionally, to achieve the ~.84 recall, FDE encodings already used 20K floats vs 10,087 in MS MACRO multi-vector baselines…”
The same above point about reranking applies to this comment as well. Specifically, if we wanted to achieve end-to-end 0.84 recall, we would not use the raw output of the 20k dimensional FDE’s. Instead, we could use the (e.g.) 960 dimensional FDE, which attains .855 R@1000 (shown in Figure 3), retrieve 1000 candidates, and rerank to output the top 100. This FDE would then be 10x smaller than the 10k floats used in the MV baseline. Alternatively, we could use our 4k dimensional FDE’s, which achieve 0.88 Recall@500, and retrieve 500 and re-rank to get roughly the same result (if not better).
With regards to the choice of k used in our Recall@k experiments, we point out that the ColBERT paper also studied similar values of k, namely (50,200,1000), as well as the PLAID paper (100,1k). Since 100,1k were the metrics considered by PLAID, our primary baseline, our end-to-end recall results focused on the same values, although we would be happy to add additional points of comparison at low recall ranges.
> W2 + Q2:
With regards to using product quantization in baselines like PLAID or ColBERTv2, we point out that those papers *extensively* utilize quantization methods in their algorithm already. Specifically, the ColBERTv2 retriever compresses points by representing each point by a nearby centroid plus a compressed residual (1-2 bits per coordinate), where the centroids are sampled from the dataset in pre-processing. PLAID, which is built on top of ColBERTv2, also uses the same compression scheme with additional optimizations. Thus, these methods crucially use complex and highly tailored quantization methods. One of our main contributions is to show that an out-of-the-box textbook product quantization method can be used to achieve similar results when applied to FDE’s.
> “how much throughput gains are due to PQs/RPs vs the particular construction of FDEs proposed in this paper?”
In Figures 14 and 15 we address exactly this question, by plotting Recall vs QPS curves for different datasets and different PQ methods, including uncompressed methods. The figure shows that using PQ gives significant throughput gains, which are needed to be competitive with PLAID (which, as discussed above, also extensively uses quantization techniques).
> W3:
We greatly appreciate the suggestions for improving the writing, and will be sure to incorporate them.
> Q1:
As discussed above, in Figure 3 we compare FDE’s without any reranking to brute-force ColBERTv2, for k=100 and 1000, as these were the Recall@K metrics considered in the PLAID paper. We will add additional experiments for smaller k to the paper as well. Since our work focuses on designing a standalone retrieval algorithm for MV models, we do not compare to IR methods (such as SV dense retrievers).
> Q3:
This is a fantastic question, and addressing it is a key contribution of our work. Specifically, we introduce the method “fill_empty_clusters” on Page 5 (see 166-180) to solve this problem, which ensures that at least one document vector maps to every cluster. As a result, there are no empty clusters on the document side, so there can be no “misses”. This makes tuning B easy – increasing B will always improve performance, so increase it as large as possible within your dimension constraints. This is also illustrated in the grid search experiments, where increasing B improves performance. We will emphasize this further by referring back to the paragraph on Page 5 in the section on grid searching.
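A minimal, hypothetical sketch of the fill_empty_clusters idea may help make this concrete. The helper name, the per-cluster block representation, and the use of dot-product similarity to a cluster representative are our assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

def fill_empty_clusters(blocks, D, centers):
    """Hypothetical sketch: any per-cluster block that no document vector
    mapped to (left as all zeros) is filled with the document vector most
    similar to that cluster's representative, so a query landing there
    never "misses".
    blocks: (B, d) per-cluster FDE blocks, D: (n, d) document vectors,
    centers: (B, d) assumed cluster representatives."""
    for b in range(blocks.shape[0]):
        if not blocks[b].any():                       # empty cluster
            nearest = int(np.argmax(D @ centers[b]))  # most similar doc vector
            blocks[b] = D[nearest]
    return blocks
```

With every document-side cluster guaranteed non-empty, increasing B only refines the partition, which matches the claim above that larger B cannot create misses.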
> “... (potentially significant) overestimation of the MaxSim similarity function.”
It is certainly true that setting B too small could result in collisions, but we note that this could actually only result in an underestimation. In fact, we prove that our method never over-estimates (Lemma A.3), since every query vector will get matched to some document vector (or none, if fill_empty_clusters is not enabled) which can only result in an underestimate of the best similarity.
---
Rebuttal Comment 1.1:
Title: some additional questions
Comment: Thanks for providing the clarifications. I have some additional questions based on the authors' responses:
* inference setups used w/ FDE + MaxSim: thanks for your response to W1 as I was confused about the exact inference setup earlier. polishing writing further with a paragraph highlighting FDE's inference setup somewhere would be useful. Some comments:
* ColBERTv2 and PLAID are more *multi-vector* SV baselines, whereas the inference setup in this paper differs by doing lightweight prefiltering w/ FDE SV to a larger set. This reminds me of retrieve-then-rerank in IR which is a reasonably common setup and may need discussions/comparisons. Some related work in this area might include Learning Query-dependent Prefilters for Scalable Image Retrieval. CVPR'09 [1], Revisiting Neural Retrieval on Accelerators. KDD'23 [2] etc. In particular [2] also proposed to retrieve with learned single-vector SV followed by re-scoring with learned similarities. It might be useful to compare learned SVs which seem to work for small ds (64d in [2]) vs FDE constructed SVs (2k-20k dimension here, albeit different datasets etc)?
* writing: would using "multi-vector heuristics" to characterize the method in ColBERTv2/PLAID make more sense, given prior work have explored two-stage SV for learned distance functions, and learned SV seems like a reasonable/competitive baseline here?
* it might be useful to report the K' and FDE dimensionalities $d$s used for prefiltering the candidate sets (per your comment *"... we wanted to achieve end-to-end 0.84 recall, we would not use the raw output of the 20k dimensional FDE’s. Instead, we could use the (e.g.) 960 dimensional FDE"*), and how this affects final end-to-end recalls to help with understanding.
* efficiency experiments: double checked the papers, it seems that we didn't use the same quantization scheme across ColBERTv2/PLAID vs this work. eg this work uses PQ-256-8 which compresses each float to 1 bit (line 307-308) whereas PLAID compresses each float to 2 bits ("For both vanilla ColBERTv2 and PLAID ColBERTv2, we compress all datasets to 2 bits per dimension, with the exception of MS MARCO v2 where we compress to 1 bit."). It would be helpful to control for related factors somehow, given retrieval efficiency is frequently memory-bandwidth bound.
* Re Q5: thanks for pointing out fill_empty_clusters. But won't fill_empty_clusters break the FDE construction in Sec 2?
---
Reply to Comment 1.1.1:
Comment: We are glad that our explanation helped clarify some matters about our retrieval setup, and we will be sure to help clarify the exact setup further in the paper. We note that our main figure 1 (on page 2) shows the retrieval and reranking process, where the Chamfer reranking is a key step – we will add a further discussion on the importance of reranking in 1.2 where it is currently described.
“multi-vector SV baselines”
What does it mean to be a multi-vector SV baseline? We were slightly confused by this description and would like to understand better.
“It might be useful to compare learned SVs which seem to work for small ds”
We agree that training a SV model to approximate the Chamfer similarity is an interesting and important direction for future work. However, as discussed in the global response, the focus of our paper is to design an improved standalone retrieval algorithm for multi-vector databases. Like PLAID, we assume we are already given as input a multi-vector database (without necessarily having a training set needed for distillation), and we need to find the approximate nearest neighbors under the Chamfer Similarity for the MV embeddings in that dataset. Thus, comparing with other SV models, such as those distilled from a re-ranker, is somewhat out of the scope of the current paper.
We also agree that pre-filtering with SV before a more complex or learned similarity is a popular and important method in the IR literature, and we will be sure to add references and discuss this alternative further in the paper.
“multi-vector heuristics”
We used the term “SV heuristic” to emphasize that the retrieval stage was only taking into account interactions between single vectors, and not aggregate interactions between sets of vectors (i.e. MV), which is what makes MV models unique. Moreover, we wanted to describe a *specific* heuristic which was to retrieve the top-k for each single vector and then aggregate them. We agree with the reviewer, though, that there may be a better term to describe this method, and will attempt to change it in the final version to something more descriptive.
“it might be useful to report the K' and FDE dimensionalities “
In the sentence on lines 332-336, we report that we used 10k-dimensional FDE’s with PQ-256-8 (32x compression) for all the latency experiments, and that we set K’ (=(# reranked)) to be equal to the beam width W (which is variable, since we tune so that our recall matches the recall of PLAID for each dataset). We will be happy to add the specific value of the beam-width K’ = W used for each of the six datasets in Figure 7 to give a more complete picture of the parameters.
“quantization scheme across ColBERTv2/PLAID vs this work”
We note that the standard PQ technique we use in this paper is different from the 1-2 bit compression scheme used in PLAID/ColBERTv2. Standard PQ-256-8 compresses chunks of size 8 into one of 256 centroids, whereas ColBERT/PLAID used many more centroids (2^32 or 2^16) plus a residual with 1 or 2 bits per dim. One advantage of PQ-256-8 is that using fewer centroids allows us to pre-compute a codebook that will fit in cache, allowing for very fast scoring (whereas 2^16 centroids would be too large of a codebook to fit in cache).
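A rough sketch of the textbook PQ-256-8 scheme described here (simplified, with our own helper names; real implementations train full k-means on a large sample):

```python
import numpy as np

def pq_train(X, chunk=8, k=256, iters=10, seed=0):
    """Toy PQ trainer: split d-dim vectors into d/chunk sub-vectors and run a
    few k-means steps per sub-space to learn (at most) k centroids each."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    books = []
    for j in range(0, d, chunk):
        sub = X[:, j:j + chunk]
        C = sub[rng.choice(n, size=min(k, n), replace=False)].copy()
        for _ in range(iters):
            assign = ((sub[:, None] - C[None]) ** 2).sum(-1).argmin(axis=1)
            for c in range(len(C)):
                if (assign == c).any():
                    C[c] = sub[assign == c].mean(axis=0)
        books.append(C)
    return books

def pq_encode(x, books, chunk=8):
    """PQ-256-8 style encoding: one centroid index (one byte when k=256) per
    chunk of 8 floats, i.e. 32x compression relative to float32."""
    return np.array(
        [((x[j * chunk:(j + 1) * chunk] - C) ** 2).sum(-1).argmin()
         for j, C in enumerate(books)], dtype=np.uint8)
```

At query time one can precompute, per chunk, the 256 dot products between the query sub-vector and the centroids; scoring a code is then just d/8 table lookups, which is why a 256-entry codebook that fits in cache matters.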
“It would be helpful to control for related factors somehow”
We note that we do conduct QPS experiments in C.4 considering different compression schemes, including PQ-256-4, which would compress blocks of 4 coordinates into one byte (so 2 bits per float). We found PQ-256-8 to offer a better QPS vs Recall tradeoff, since scoring is faster with a more compressed vector (and not much recall is lost). Since there are several differences between our approach and PLAID — (1) we are using a graph-based index for retrieval instead of IVF, which has different memory access patterns (IVF enjoys more sequential accesses), and (2) we are doing PQ mainly on the FDE’s while they are doing PQ on the original 128-d vectors — it would be difficult to compare the two methods in precisely the same setting. However, our PQ-256-8 and PQ-256-4 with a similar number of retrieved candidates per query vector (and using 10k-dim vectors) would give the same number of bits retrieved from memory on MS Marco as PLAID (although note that our FDE vs. SV Heuristic section shows that FDE’s need to retrieve fewer candidates to get the same recall).
“won't fill_empty_clusters break the FDE construction in Sec 2?”
We would like to emphasize that fill_empty_clusters is part of Section 2, and is therefore used in the theorems that we prove and in all of our experimental results. Far from breaking the FDE construction in Sec 2, fill_empty_clusters is actually a core part of it.
We thank the reviewer again for this valuable discussion. If the reviewer agrees with the above points of clarification, we encourage them to reevaluate their score. | Summary: Efficient vector retrieval to maximise inner product similarity is well studied, and this paper explores the issue of multi-vector retrieval to support late interaction models like Colbert. The core idea is to use SimHash to generate clusters of multiple representation of documents, represent each document's vectors in a cluster using a centroid, and use that with the centroid of query vectors that land in the same cluster. Authors rely on the SimHash's approximation theory to show that this approach approximates the maxSim measure used by Colbert. Experimental results indicate Muvera outperforms PLAID in some of the datasets from BEIR in terms of query latency and Recall@k.
Strengths: s1. Simple yet effective idea that seems to work on a variety of datasets.
s2. Well written, and experiments are well conducted.
Weaknesses: w1. the theoretical analysis of the presented approach is rather limited. SimHash theory focuses on EMD minimisation while the focus here is on Chamfer similarity maximisation. So, it would be good to detail the theory further.
w2. despite multiple stages listed for PLAID, the performance of MUVERA is not particularly strong -- in terms of latency, PLAID outperforms MUVERA on largest dataset (MS-MARCO). It leads one to question if the observed latency gains are primarily due to small(er) dataset sizes? Similarly in terms of recall@k, MUVERA outperforms PLAID only in a couple of datasets.
w3. there is no mention of how far from ideal (i.e, ColbertV2 performance) is the proposed model?
Technical Quality: 3
Clarity: 3
Questions for Authors: q1. I would have liked to see the connection with the EMD approximation theory of SimHash with the Chamfer distance approximation presented here. In general, it seems to me --authors can clarify if I am mistaken-- the key innovation is the use of centroids in each bucket, instead of the standard O(n^2) similarity computation done even with SimHash in applications such as shingling. It would also be useful to show the sensitivity to the value of B = number of partitions.
q2. Improvements over PLAID are not significant in my opinion -- in terms of both latency and recall @ k. There are only few datasets (NQ-{100,1000}, HotPotQA-{100, 1000} the recall is better, and latency is worse in ms-marco (a commonly used Colbert benchmark). Further, it would be also worthwhile comparing with the baseline of ColbertV2 (without using any of these efficient indexing techniques).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Given that the work is aimed at improving the performance of an existing retrieval framework, I do not expect to see any specific negative societal impact from the work. Authors also have given similar remarks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and suggestions. We reply to the main questions and concerns below, and attempt to clarify several points.
> W1: “SimHash theory focuses on EMD minimisation while the focus here is on Chamfer similarity maximisation”.
We would like to point out that this is not accurate. SimHash as a Locality Sensitive Hash (LSH) function dates back to the paper “Similarity Estimation Techniques from Rounding Algorithms” (Charikar, STOC ‘02), where it is used to estimate the cosine similarity between single vectors. In fact, to our knowledge SimHash has not been previously used for theoretical guarantees for EMD in any work. Instead, theoretical results for sketching EMD, such as the seminal “Algorithms for Dynamic Geometric Problems over Data Streams” (Indyk, STOC ‘04) or “Earth Mover Distance over High-Dimensional Spaces” (Andoni, Indyk, Krauthgamer, SODA ‘08), use L1-distance LSH’s, which are based on random hypergrid shifts, instead of SimHash. However, if the vectors are normalized (as they are for us), then dot-product similarity and cosine similarity are the same. This is why we use SimHash for our theoretical (and practical) algorithm: Chamfer similarity is an aggregate over the max dot product / cosine similarity of each individual vector.
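For concreteness, the Chamfer similarity referenced here is CH(Q, D) = sum over q in Q of the max over d in D of <q, d>. A minimal sketch (ours, for illustration only):

```python
import numpy as np

def chamfer_similarity(Q, D):
    """Chamfer (MaxSim) similarity: for each query vector, take its best
    dot product against all document vectors, then sum over query vectors.
    Q: (m, d) query embeddings, D: (n, d) document embeddings."""
    scores = Q @ D.T              # (m, n) all pairwise dot products
    return scores.max(axis=1).sum()

Q = np.array([[1.0, 0.0], [0.0, 1.0]])
D = np.array([[1.0, 0.0], [0.5, 0.5]])
# each query row is matched to its best document row: 1.0 + 0.5 = 1.5
```

Note the asymmetry: queries match into documents but not vice versa, which is one of the differences from EMD discussed below.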
> “So, it would be good detail the theory further.”
Could you please clarify what part of the theory you would like to see explained further so that we can do so? Note that our theorem proves an eps-approximation of the Chamfer similarity with the FDE dot product, which is essentially the best result one could hope for in a reduction from Chamfer similarity to dot-product similarity. Our method for embedding a set of vectors into a single vector is novel, and has not been used in prior work.
> Q1:
Firstly, we emphasize that the theoretical bound in our paper (Theorem 2.1) applies to the Chamfer Similarity, which is exactly the similarity considered in the rest of the paper. We reiterate that SimHash itself is a Locality Sensitive Hash function for cosine similarity as described above, not for EMD. Thus, there is no direct connection between our theoretical results and known sketching techniques for EMD. The only similarity to sketching techniques for EMD, which is mentioned in the paper, is that sketches for EMD also take the approach of embedding a set of vectors into a single vector using LSH, but the approach for doing so is dramatically different due to the significant differences between EMD and Chamfer (e.g., Chamfer is asymmetric and has no matching constraint).
> “-the key innovation is the use of centroids in each bucket, instead of the standard O(n^2) similarity computation done even with SimHash in applications such as shingling. “
While the usage of centroids for the construction of the document-side FDE’s is important, it is not the key innovation in our paper. Specifically, the goal of our paper is not to avoid an O(n^2) similarity computation (where n is the size of the query set), but rather to provably embed Chamfer similarity into dot-product similarity. SimHash is used as a method to turn a multi-vector similarity (where different vectors can match with each other) into a single-vector similarity, which fixes the ordering in which the coordinates align. The key idea is to use SimHash to partition the space so that close points always end up in the same bucket, and therefore in the same block of coordinates, where they can be matched. For the theory, the centroid used in the document FDE could have been equivalently replaced with any point from the document that landed in that partition, and the theoretical bounds would still apply. Another key insight novel to this work is the “fill_empty_clusters” method, which assigns the nearest document point to any empty cluster to ensure that increasing the number of buckets does not decrease quality.
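A toy sketch of the SimHash-partition-plus-centroid idea described above may be helpful. The function names and per-bucket handling are our simplifications (empty buckets are left as zeros here rather than filled as in fill_empty_clusters):

```python
import numpy as np

def simhash_partition(X, G):
    """Assign each row of X to one of 2^k buckets via k SimHash sign bits,
    using k random Gaussian directions G of shape (k, d)."""
    bits = (X @ G.T) > 0
    return bits.astype(int) @ (1 << np.arange(G.shape[0]))

def document_fde(D, G):
    """Toy document-side FDE: concatenate one centroid per SimHash bucket.
    Empty buckets are left as zeros here; the paper's fill_empty_clusters
    step would instead fill them with the nearest document vector."""
    k, d = G.shape
    fde = np.zeros((1 << k, d))
    ids = simhash_partition(D, G)
    for b in range(1 << k):
        if (ids == b).any():
            fde[b] = D[ids == b].mean(axis=0)   # centroid of bucket b
    return fde.ravel()

def query_fde(Q, G):
    """Toy query-side FDE: per-bucket *sum* of query vectors, so that
    <query_fde, document_fde> adds up, for each query vector, a dot product
    with its bucket's document centroid -- a proxy for Chamfer similarity."""
    k, d = G.shape
    fde = np.zeros((1 << k, d))
    for i, b in enumerate(simhash_partition(Q, G)):
        fde[b] += Q[i]
    return fde.ravel()
```

The asymmetry noted in the rebuttal is visible here: the document side averages within a bucket, while the query side sums, so the FDE dot product aggregates one (approximate) best match per query vector.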
> W2
The dataset size alone does not explain the performance of PLAID vs MUVERA on MS MARCO: MUVERA significantly outperforms PLAID on HotpotQA (5.2M documents) and NQ (2.7M documents), which are both on the same order of magnitude as MS MARCO (8.8M documents). Instead, as discussed in the paper, note that PLAID was highly optimized for MS MARCO, and was the culmination of multiple papers (ColBERT, ColBERTv2, PLAID) that successively optimized retrieval on the MS MARCO dataset. Our method, on the other hand, is not fine-tuned in any way for MS MARCO, and the same parameters achieve good results on all of our datasets without re-tuning. With regards to recall@k, in our latency experiments we set our beam width so that our recall matches the recall of PLAID, so that we can primarily compare the latency at a given recall. The goal in these experiments was not to outperform PLAID’s recall, but rather to outperform it in latency at a fixed recall.
> W3: “there is no mention of how far from ideal (i.e, ColbertV2 performance) is the proposed model?”
We actually do compare to the ideal ColbertV2 performance (i.e., the performance obtained by a brute-force search using Chamfer similarity) in the paper. Specifically, in Figure 3 we show the baselines for “Exact Chamfer@N” for various N, which is precisely the brute-force ColbertV2 performance. The main insight from this figure is that one can over-retrieve with FDE’s and then re-rank to obtain near-ideal or ideal performance (i.e., matching ColbertV2) by using an FDE with recall value (given by the dots) above the corresponding line in the plot. We do this in our end-to-end experiments, where we obtain, for instance, 90.2 R@100 and 97.1 R@1k for MS MARCO (Figure 7), which can be compared to 91.4 and 98.3 for ColBERTv2 (as reported in Figure 3 and also in the PLAID paper). Interpretation of Figure 3 is further discussed in our response to reviewer Z5yi.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by authors
Comment: I thank the authors for providing detailed responses to my questions.
> SimHash as a Locality Sensitive Hash (LSH) function dates back to the paper “Similarity Estimation Techniques from Rounding Algorithms” (Charikar, STOC ‘02), where it is used to estimate the cosine similarity between single vectors.
In (Charikar, STOC'02) EMD approximation using LSH is given in Section 4. I disagree that the paper dealt only with cosine similarity between single vectors. If you believe that your analysis is significant improvement or different over the analysis given there, please clarify.
Since some of my other queries are dependent on this aspect, I would like to hear from the authors before I update my rating.
> We actually do compare to the ideal ColbertV2 Performance (i.e., the performance obtained by a brute-force search using Chamfer similarity) in the paper. Specifically, in Figure 3 we show the baselines for “Exact Chamfer@N” for various N, which is precisely the brute-force ColbertV2 performance.
Thanks for pointing this out. The computational cost for over-retrieving (up to 10k for reaching ColbertV2) and reranking is not given. The latency and QPS plots seem to be for retrieving 1000 (please clarify if I am incorrect), and how does this over-retrieval compare with the performance of ColbertV2?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for this discussion and valuable comments.
>”In (Charikar, STOC'02) EMD approximation using LSH is given in Section 4. I disagree that the paper dealt only with cosine similarity between single vectors. If you believe that your analysis is significant improvement or different over the analysis given there, please clarify.”
Note that we did not claim that the (Charikar, STOC'02) only considered cosine similarity between single vectors. Instead, we stated that *SimHash*, which is a particular LSH function, was only used in that paper to estimate cosine similarity. For completeness: SimHash is a random hash function h:R^d → {0,1} which maps a d-dimensional vector to a single bit {-1,1} by (1) drawing a random gaussian vector g ~ R^d and (2) outputting the sign bit of the dot product <x,g>, so SimHash(x) = Sign(<g,x>). The reviewer is correct that the Charikar paper gives algorithms for sketching EMD, but they are *not* based on SimHash.
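The SimHash definition given above can be checked empirically: the well-known guarantee (Charikar, STOC ‘02) is that two vectors agree on a random sign bit with probability 1 - θ/π, where θ is the angle between them. A small sketch of ours:

```python
import numpy as np

def simhash_bit(x, g):
    """SimHash(x) = sign(<g, x>) for a random Gaussian direction g."""
    return 1 if g @ x >= 0 else -1

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])        # orthogonal: theta = pi/2
agree = np.mean([simhash_bit(x, g) == simhash_bit(y, g)
                 for g in rng.standard_normal((20000, 2))])
# agreement rate should be close to 1 - (pi/2)/pi = 0.5
```

For normalized vectors this makes SimHash an LSH for cosine (equivalently dot-product) similarity, which is the property the FDE construction relies on.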
Let us attempt to clarify the differences in the historical approaches to sketching EMD vs. the approach in our paper. In the Charikar paper, as well as all other papers on sketching EMD, the main technique is called probabilistic tree embeddings (e.g. Bartal trees or FRT embedding, see e.g. Bartal, Yair. "Probabilistic approximation of metric spaces and its algorithmic applications."). This technique creates a randomized embedding F from the original metric ( R^d for us, but also works for any metric) into a tree metric T, such that the shortest path distances in the tree T approximates the original metric distances. For two sets A,B, if we want to estimate EMD(A,B), we first apply F to each point in A and B, to get two sets of vertices F(A),F(B) in a tree. One then tries to compute the EMD in the tree; the key point is that EMD in a tree can be embedded isometrically into the L1 metric (over single vectors) via a simple folklore embedding. This is all fundamentally different from our approach, which does not use tree embeddings at all.
The above is the general recipe used for all algorithms for sketching EMD. The Charikar paper obtained a log(n) log log(n) approximation for any general metric. Indyk (STOC ‘04) improved this to O(log n) for the 2-dimensional grid (R^2), and (Andoni, Alexandr, Piotr Indyk, and Robert Krauthgamer. "Earth mover distance over high-dimensional spaces." SODA 2008) extended this to a log^2(n) for high-dimensional spaces R^d. These last two algorithms used an L1 distance LSH based on random Quadtree decompositions (which splits the space into random nested hypergrids), which is fundamentally different from SimHash.
For a more detailed overview of the history and techniques used in sketching EMD, see the overview Section 1.1, of (Rajesh Jayaram, Erik Waingarten, and Tian Zhang. "Data-Dependent LSH for the Earth Mover’s Distance." STOC 2024, https://arxiv.org/pdf/2403.05041).
Now for our results, note that we surprisingly are able to prove strong sketching results without the usage of probabilistic tree embeddings at all. Further, the reviewer is correct that we get a much better approximation: specifically, we get a (1+eps) approximation instead of an O(log n) or O(log^2(n)) approximation. This is possible due to our different techniques and the many differences between EMD and Chamfer (e.g., Chamfer is asymmetric, and does not satisfy the triangle inequality). Some of the key differences are: (1) we use an *asymmetric* LSH (encoding documents differently from queries) based on SimHash; (2) we do not use a tree embedding approach, but instead partition the space at a *single* granularity (unlike the nested partitions in tree embeddings); (3) we embed Chamfer into single vectors with the *dot product similarity* instead of the “L1 distance” – these two spaces behave very differently. The combination of these three different techniques, as well as the difference in the function we are trying to approximate (Chamfer vs. EMD), is how we get the improved approximation.
> “ The computational cost for over-retrieving... is not given ... how does this over-retrieval compare with the performance of ColbertV2?”
We emphasize that the experiments the reviewer is discussing in Figure 3 are part of our *offline experiments*, which are meant to show how over-retrieval with FDE’s compares to brute force (using the ColBERTv2 model). Since this paper introduces a new sketching method (FDEs), we used this experiment to show how well FDEs approximate brute-force MaxSim (Chamfer).
We note that we do spend substantial time in the paper discussing online retrieval experiments which show the tradeoff between recall and QPS (Figures 6 and 7, and more in the supplementary materials). These experiments compare our online retrieval solution to PLAID, which is the fastest retrieval mechanism for the ColBERTv2 model to date. Note that PLAID is orders of magnitude faster than brute-forcing ColBERTv2. | Summary: The paper proposes a method of speeding up text retrieval by approximating the ColBERT multi-vector ranker with a very high-dimensional single vector using projections.
Authors demonstrate that the proposed approach is better compared to a colbert-based single vector heuristic and is comparable to PLAID in terms of accuracy and speed, but is easier to tune.
Strengths: * The paper is relatively easy to follow (though using many acronyms like SV, MV, FDE makes it harder)
* The approach is more efficient at retrieval compared to the SV heuristic baseline
* Authors show that the method does not require much parameter tuning to achieve results comparable to PLAID.
Weaknesses: 1) The heuristic approach is reminiscent of [Morozov, Stanislav, and Artem Babenko https://arxiv.org/pdf/1908.06887 ] where a more general problem is stated with a more general solution (using the ranker as the metric during the graph search). This approach seems extremely relevant, but not discussed/compared in the paper.
2) Using a single vector embedding search to approximate a more complex interaction sounds like distillation of a ranker to a retrieval, which is a well-known baseline (e.g. googling gives https://arxiv.org/pdf/2210.11708 or https://arxiv.org/pdf/2403.20327 ). As both ranker and retrieval are trained, it is not clear why one would not train the retrieval model to directly optimize the ranking objective. There might be scenarios where distilling is not feasible for some reason, but for a research paper I think it is important to show how much of the performance is lost.
Technical Quality: 3
Clarity: 3
Questions for Authors: I'd like the authors to address both points of weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The approach is limited to multi-vector ranking as the function.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions about the paper. Below, we address both points of weakness mentioned by the reviewer.
> W1:
We thank the reviewer for the reference to this paper. Using a re-ranker or general similarity metric in the graph Beam Search is an interesting approach, which has not been considered by any of the prior extensive literature on multi-vector models and retrieval that we know of. One important point to note is that scoring in the graph search becomes significantly more expensive when the re-ranking similarity is used (Chamfer for us). In contrast, our method uses heavily quantized vectors in the search (compressed by 32x), so that scoring the dot product between two 5k-dimensional FDE’s can be done in roughly the same time as the dot product between two 5000/32 ~ 150-dimensional vectors of floats, which is significantly faster than computing Chamfer similarity (which requires a (32 x 128) x (128 x ~78) matrix product). Furthermore, our approach is based on provable theoretical bounds, whereas it is not clear that searching directly with Chamfer similarity would work in graph based approaches (due to the non-metric aspects of the similarity).
> W2:
As discussed in the global author response, we would like to emphasize that the focus of our paper is to design an improved standalone retrieval algorithm for multi-vector databases. This is exactly the same goal as the PLAID paper. Thus the reviewer’s concern would apply equally to the (highly-successful) PLAID paper, whose retrieval approach is purely based on the multi-vector representations.
Therefore, like PLAID, we assume we are already given as input a multi-vector database, and we need to find the approximate nearest neighbors under the Chamfer Similarity for the MV embeddings in that dataset. Thus, training different MV models, or comparing with other SV models, such as those distilled from a re-ranker, are both out of the scope of the current paper. We remark that the comparison of MV models versus other methods of retrieval have been extensively evaluated in the prior literature on multi-vector retrieval, and the continued extensive research on multi-vector models and retrieval is testament to the power of multi-vector models.
---
Rebuttal Comment 1.1:
Title: re: rebuttal
Comment: I would like to thank the authors for the response and for the scope clarification. I'll increase the soundness score.
I think the paper scope on adapting the existing multi-vector solutions limits its interest from a broader community, so I am going to keep my rating.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their comment, but disagree with the assessment of limited interest in this domain from the broader community! Importantly, we emphasize the following:
(1) The demand for neural information retrieval is stronger than ever. If the number of startups that are forming to offer this service, or the number of cloud service providers that are offering a variety of services that use information retrieval is not convincing enough, one can look at the sheer number of research papers that are being written in this domain. Here is a list of some of the papers that are posted to arxiv since the beginning of 2024 specifically on multi-vector: https://arxiv.org/abs/2407.20750 https://arxiv.org/abs/2405.15028 https://arxiv.org/abs/2404.02805 https://arxiv.org/abs/2404.00684 https://arxiv.org/abs/2402.15059 https://arxiv.org/abs/2403.13291 https://arxiv.org/abs/2402.03216
(2) While a lot of headroom remains on the table in neural information retrieval, there hasn't been a lot of progress in the single vector modeling recently and it is believed that single vector models can’t address the remaining major challenges and multi-vector approaches can potentially move the needle here. These challenges include: information retrieval for ambiguous, multi-intent, and not-a-right-answer queries; information retrieval for complex and nuanced queries that require multiple different pieces of information to be answered.
(3) In GenAI, which is undoubtedly the hottest field of AI at the moment, neural information retrieval is critical and is going to remain important in the foreseeable future in the RAG type systems. In this context, similar to (2) above, single vectors fall short in terms of quality for complex and critical problems such as multiple needles in a haystack problem that is in the center of many generative tasks.
(4) While multi-vector models have already shown to outperform single-vector models for information retrieval, the main reason their use-case has not become standard in all the above mentioned domains is their lack of efficiency compared to single vector models and retrieval systems.
We believe the above provides substantial evidence for the broad appeal of algorithms for multi-vector models. | Summary: Multi-vector representations can greatly help retrieval systems work efficiently and accurately. This paper transforms these sets of multiple vectors into a single-vector representation that still encapsulates the information from the multiple vectors, allowing efficient search using traditional single-vector search techniques while preserving the benefits of multi-vector search for rich document representation.
Strengths: Novelty and generalizability: MUVERA introduces Fixed Dimensional Encodings (FDEs), which compress multi-vector data into a single vector. This concept is innovative as it allows applications of single-vector retrieval techniques, such as MIPS, to multi-vector problems. I can also foresee how this work generalizes to other tasks and other datasets.
Comprehensive experiments: the experiments thoroughly compare against state-of-the-art techniques, with careful experimental settings and sound methodology.
Weaknesses: Computational overhead: the overhead introduced by FDE creation and query processing may become high as the dataset grows. MUVERA may have problems with preprocessing and dynamic updates.
The robustness of MUVERA across different types of retrieval tasks and its effectiveness in different domains (e.g., legal documents versus scientific articles) have not been extensively validated; for domain-specific tasks, it could be beneficial to run experiments on datasets from diverse domains.
Technical Quality: 4
Clarity: 3
Questions for Authors: What are the main trade-offs in this work? Maybe dimension reduction and accuracy are one of them: it is advantageous for processing speed and memory usage, but it can also lead to information loss.
What are the specific challenges encountered when implementing FDEs in real-world systems, particularly concerning maintaining the balance between efficiency and retrieval accuracy?
Tuning parameter W in this work: "The only tuning knob in our system is W; increasing W increases the number of candidates retrieved by MUVERA, which improves the recall" Can this be pre-determined, or can you use a specific algorithm to decide it?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: If the computation overhead is a barrier problem for MUVERA running on a very large dataset, it may face limitations on scalability of the FDE approach, especially in terms of how well it can be adapted to very large datasets or highly dynamic environments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions. We respond now to the specific questions of the reviewer.
> “Computational overhead: as introduced by the FDE creation and query processing, the computational overhead may be high as the dataset grows. The MUVERA may have problems in preprocessing and dynamic updates.”
We note that the cost of FDE generation is only incurred at index time, which can be done offline. Furthermore, the cost of generating an FDE from the multivector representation is linear in the size of the multivector representation, and the complexity scales linearly with the number of repetitions performed. We note that our latency measurements also include the cost of generating the FDE from the query’s multivector representation, and that the time spent on this step is less than 1% of the overall query time. Also note, with regards to dynamic updates, that the construction of FDEs is data-oblivious, and they are therefore easy to generate and maintain in a dynamic setting.
> “Robustness of MUVERA across different types of retrieval tasks and its effectiveness in different domains (e.g., legal documents versus scientific articles)”
We note that the BEIR retrieval benchmarks that we evaluate on, which are standard in the IR literature, are diverse, and include a mix of scientific documents, web queries, quora QA, and other types of domains. In this paper, we chose to focus on the same (diverse) set of datasets used in prior work on multivector representations.
> “What are the main trade-offs in this work? Maybe dimension reduction and accuracy are one of them: it is advantageous for processing speed and memory usage, but it can also lead to information loss.”
We agree with the reviewer that the relationship between FDE dimension and accuracy is the main trade-off that we optimize in this work. We further leverage compression techniques like product quantization to enable using higher-dimensional FDEs while keeping the index size for the FDEs moderately sized. Another trade-off that we explored is the relationship between the number of query embeddings and the search latency (via ball-carving) —to the best of our knowledge our work is the first to identify this relationship and we believe it may be of independent interest and importance in future work on multivector systems.
> “What are the specific challenges encountered when implementing FDEs in real-world systems, particularly concerning maintaining the balance between efficiency and retrieval accuracy?”
The main challenge is to tune the dimensionality of the FDEs to achieve a desired level of accuracy while bounding the space usage of the index. Thanks to the strong theoretical properties of FDEs, our implementation of FDEs is robust to specific choices in the FDE design space such as the type of partitioning used (as illustrated in Figure 3). After fixing a dimension that provides suitable accuracy, the next major challenge is to tune the quantization level to keep the index size manageable—as we show in the paper FDEs are remarkably robust to quite aggressive quantization, which enables the index size of MUVERA to be comparable to highly-engineered systems such as PLAID.
> “Tuning W”
W is a parameter that the user needs to tune to achieve a suitable level of recall—we note that this is standard practice in IR systems that are based on graph-based approximate nearest neighbor search. Understanding how to automatically set W to achieve a given level of recall seems to be equivalent to obtaining provable guarantees for these ANNS methods which is a major open question.
---
Rebuttal Comment 1.1:
Title: Some additional questions
Comment: Thanks for the clarification.
I have some additional questions for this work:
- First, in terms of FDE generation, since I don't have access to your code, I'm curious how long it takes to finish FDE generation on the datasets you used.
- Second, the data diversity I mentioned raises another concern: skewed data distributions. Can this framework somehow overcome this problem theoretically?
- How big is the index in terms of disk usage on some specific dataset? Do you need to load the index into memory like HNSW indexes? Your index's disk usage and memory footprint may be a crucial point of your work.
- On the other hand, building an index on a multi-vector is somehow a fixed solution for specific multi-vector queries, e.g., we have vector columns vc1, vc2, vc3, vc4, and we somehow build them together via FDE, your framework is focused on the combinational query on vc1, vc2, vc3, vc4, right? Can it be used to serve queries on vc1 + vc3?
- Does the W parameter have a relatively monotonic effect on the results? For example, in IVF indexes, the larger the search partition parameter is set (n_probe), the higher the recall rate will be, and the longer the latency will also be expected.
---
Reply to Comment 1.1.1:
Comment: Thanks for your further comments and questions.
> “first, in terms of FDE generation, due to the fact that I don't have your code access, I'm curious how long it takes to finish FDE generation on your datasets provided.”
FDE generation is extremely fast—the average running time to generate an FDE is roughly 0.001s, and is embarrassingly parallelizable.
> “skewed data distribution, can this framework somehow overcome this problem theoretically?”
With regards to skewed data distributions, we would like to emphasize that our theoretical results hold for any input, even worst case inputs, as is standard in theoretical computer science. Thus skewed data would not affect the performance of our algorithm theoretically.
> “how big is the index in terms of the disk usage on some specific dataset? Do you need to load the index into memory like HNSW indexes? Your index disk usage and memory footprint may be a crucial point of your work.”
We provided some details about the index size in our discussion with reviewer Z5yi. The index size for MUVERA is dominated by the space used for the multivector embeddings. As we discussed with reviewer Z5yi, future optimizations for our system could reduce the space usage by applying quantization techniques to the multivector embeddings. We note that the space used for our graph index and FDEs is about 10% of the space of the original multi-vector representations. We would be happy to add exact space usage numbers for the index to the final version of the paper.
Yes, we do load the index into memory as in an HNSW index. We are using the DiskANN algorithm fully in-memory, which we will emphasize in the paper. The DiskANN algorithm is extremely competitive and many vector search companies (e.g., Pinecone) have migrated from HNSW to DiskANN due to its strong performance.
> “building an index on a multi-vector is somehow a fixed solution for specific multi-vector queries, e.g., we have vector columns vc1, vc2, vc3, vc4, and we somehow build them together via FDE, your framework is focused on the combinational query on vc1, vc2, vc3, vc4, right? Can it be used to serve queries on vc1 + vc3?”
We are not fully sure what the reviewer means by this question—if we understand the reviewer’s question correctly, the reviewer is describing a vector database use case where an object has multiple vector columns (v1, .., v4), where the vectors are produced using potentially different single-vector models, which is fundamentally different from the setting that we consider.
Our paper is focused on the multivector setting where we have a single multivector *model* which transforms an object into a set of vectors, which all live in the same latent space. When a query arrives, we run the same model on it to transform it into a set of vectors. The goal is to find the most similar document, where similarity is given by a specific set-set function—the chamfer similarity.
> “Does the W parameter have a relatively monotonic effect on the results? For example, in IVF indexes, the larger the search partition parameter is set (n_probe), the higher the recall rate will be, and the longer the latency will also be expected.”
Yes, increasing W monotonically improves the quality of the results—it will result in higher recall at the expense of higher latency as the reviewer correctly pointed out. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their thoughtful comments and suggestions. We reply to the questions and comments of each reviewer individually in the corresponding rebuttal fields. First, we would like to emphasize here several global points which will address common concerns of the reviewers.
Firstly, we stress that the goal of our work is to design a more practically- and theoretically-efficient method for multi-vector retrieval. We do not focus on modeling aspects, or comparing multi-vector with other retrieval methods such as SPLADE or other single-vector methods. Such comparisons have already been considered extensively in the literature. Like PLAID, our method is a standalone multi-vector retriever, is independent of the underlying multi-vector model, and works given any database of multi-vector embeddings. Therefore, while there may be other approaches to IR not based solely on the multi-vector representations, they are out of the scope of this paper.
Secondly, we would like to make clarifying points with respect to the comparison of MUVERA and PLAID. While our goal is to design a state of the art retrieval system for multi-vector search, we see the main contribution of our work as contributing a new approach to multi-vector search which is based on principled theoretical guarantees, as an alternative to the prior heuristic approaches used by other work on multi-vector retrieval. PLAID itself is a highly-engineered system which is built on a sequence of successive optimizations (ColBERT, ColBERTv2, PLAID), and has been carefully hyper-optimized, especially for MS MARCO. In fact, quoting from a recent reproducibility study of PLAID (https://arxiv.org/pdf/2404.14989): “PLAID’s Pareto frontier is a patchwork of parameter settings; changing one parameter without corresponding changes to the others can result in slower retrieval with no change to effectiveness.”
In contrast, our paper is the very first of its kind using the new FDE method, and is significantly simpler to tune and, as a result, is more straightforward to generalize to arbitrary models and data sets without significant quality regression. We used the *same parameter settings* for all six of the datasets tested, and on five out of the six we outperformed PLAID. On MS MARCO, which PLAID was highly tuned for, we nearly matched its performance, which we find to be a strong contribution given that our technique has not benefited from the same extensive engineering as PLAID.
Finally, we emphasize that in our offline experiments, we show that as a method for retrieval FDE’s are 2x-5x more efficient than the SV heuristic (Figure 5), meaning that they need to retrieve many fewer candidates to achieve the same recall. Since PLAID uses (an optimized version of) the SV heuristic, we believe that this signals that there is significant headroom for optimizing the approach of using FDE’s for multi-vector retrieval. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents a method of converting the multi-vector query-document retrieval problem into a single-vector query-document retrieval problem. The basic idea is to project the multi-vector queries and documents into a fixed-dimensional embedding through random projections. Further efficiency in storage of the random projections is achieved through vector quantization (product quantization) methods. The method is shown to result in improved recall as well as improvements in time performance. The projection into the fixed-dimensional space is obtained by locality sensitive hashing (LSH). This concept could have been explained in a much simpler manner than presented in the paper.
Strengths: The formulation of the problem and the theoretical proofs are the strength of the work. The experimentation has been done on the obligatory BEIR benchmark datasets. Comparison has been made to one other approach (PLAID).
Weaknesses: It seems to me in the current age of LLMs, this technique appears to be a bit dated. There wasn't any mention of SpladeV2 and its variants either that are actively being used in commercial vector databases along with vector quantization in billion-vector size databases. Addressing this would be good for related work. A 0.6 sec time performance for a single search is too slow for commercial uses, since typically multiple searches are rolled into an overall one.
It is difficult to reproduce this work from the level of detail provided. A block diagram illustrating the overall steps would be useful.
There are typos in the draft so a good spell check is needed.
Technical Quality: 2
Clarity: 2
Questions for Authors: The fact that LSH preserves nearness is known. How does adding product quantization alter this ability? Does this technique allow for vector search using the compressed representation or the vectors need to be decompressed back before similarity search?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations of the algorithm have not been adequately addressed. For example, for what sized datasets is this technique suitable or will be an overkill?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions. We respond now to the specific questions of the reviewer.
> “It seems to me in the current age of LLMs, this technique appears to be a bit dated. There wasn't any mention of SpladeV2 and its variants either that are actively being used in commercial vector databases along with vector quantization in billion-vector size databases.”
We would first like to emphasize that multi-vector models are a fairly new technique which have attracted considerable attention in the last few years, especially in the last 2 years, with many papers being published on the topic at each major conference (see the large list of references in the paper for a subset of these works). Thus, it does not seem to be the case that multi-vector models are outdated in the current LLM era.
Furthermore, while LLMs have shown amazing results in generative tasks, they are not really strong for information retrieval tasks out of the box and, for example, using standard techniques such as special token pooling or average pooling on top of LLMs for information retrieval from large corpora of text doesn’t match up with even older and simpler semantic encoder approaches. As a result, a contrastive loss based tuning is applied to embeddings pooled from LLMs to make them useful for information retrieval (e.g. Sentence T5, Gecko). Multi-Vector training over LLMs is an even more recent and active field of research that has been challenging researchers in academia and industry on both the modeling and the efficiency fronts (e.g. XTR, ALIGNER, ColBERT, ColBERTer, ColBERTv2).
Finally, we emphasize that the goal of our work (like PLAID) is to design a more practically- and theoretically-efficient method for multi-vector retrieval. We do not focus on modeling aspects, or comparing multi-vector with other retrieval methods such as SPLADE or other single-vector methods. Such comparisons have already been considered extensively in the literature.
> “A 0.6 sec time performance for a single search is too slow for commercial uses, since typically multiple searches are rolled into an overall one.”
We would like to emphasize that these latency experiments were run on a single core, as is standard practice in the line of IR papers our work contributes to (ColBERT, ColBERTv2, PLAID). In commercial settings, queries are typically run using more cores, and the queries are distributed across multiple machines resulting in significantly faster searches. We focused on single core evaluation as this is the standard methodology for measuring latency in the information retrieval literature (e.g., PLAID and other IR papers all use single core measurements when reporting latency).
> “A block diagram illustrating the overall steps would be useful.”
We include block diagrams of the overall steps of our algorithm in Figure 1; if the reviewer has additional suggestions on how to improve our figure we would be happy to incorporate them.
> “The fact that LSH preserves nearness is known. How does adding product quantization alter this ability?”
Firstly, we would like to point out that the product quantization is being done on top of the FDEs, not on the original vectors before they are added to the FDEs. Since LSH is only used to decide how to add initial vectors to the FDEs, product quantization does not interact with the LSH at all in this way. Nevertheless, for completeness we remark that even if we did do PQ first before applying LSH, the theoretical and practical guarantees of LSH would still hold. This is because PQ approximates the original vector by a very nearby vector (which is represented in compressed form), therefore preserving nearness to the original vector. So even if PQ was applied before LSH, the guarantees of LSH would still apply.
> “Does this technique allow for vector search using the compressed representation or the vectors need to be decompressed back before similarity search?”
Yes, the vector search and scoring is done using the compressed representations. This is one of the main benefits of product quantization – you can score similarity even faster than a fixed dot product using asymmetric scoring – this is a standard technique in similarity search.
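As a hedged illustration of asymmetric PQ scoring (the standard technique referred to here, not necessarily the exact variant used in MUVERA): the query stays uncompressed, per-subspace dot products with every centroid are precomputed once, and each compressed vector is then scored by cheap table lookups instead of a full dot product.

```python
import numpy as np

def pq_encode(X, codebooks):
    """Product-quantize rows of X: split each row into M subvectors and
    store the index of the nearest centroid per subvector.
    X: (n, d), codebooks: (M, K, d_sub) with d = M * d_sub."""
    M, K, d_sub = codebooks.shape
    codes = np.empty((X.shape[0], M), dtype=np.int64)
    for m in range(M):
        sub = X[:, m * d_sub:(m + 1) * d_sub]
        dists = ((sub[:, None, :] - codebooks[m][None, :, :]) ** 2).sum(-1)
        codes[:, m] = dists.argmin(1)
    return codes

def pq_asymmetric_dot(q, codes, codebooks):
    """Asymmetric scoring: precompute per-subspace dot products of the
    uncompressed query with each centroid (an (M, K) table), then score
    every compressed vector via M table lookups."""
    M, K, d_sub = codebooks.shape
    tables = np.stack([codebooks[m] @ q[m * d_sub:(m + 1) * d_sub]
                       for m in range(M)])        # (M, K)
    return tables[np.arange(M), codes].sum(axis=1)
```

When a database vector lies exactly on its chosen centroids, the asymmetric score equals the exact dot product; in general it approximates it with error governed by the quantization residual.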
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for those clarifications. So could you comment on how your method compares to Splade V2?
Also, what would be the time performance in realistic settings? What is the largest vector database you have tested this against?
Finally, it would be good to compare the results of retrieval with and without PQ and LSH components to see how much loss in accuracy.
So although the response addressed my questions somewhat, the answers I am looking for are still not there, as indicated above. | Summary: This paper aims to improve the search efficiency of multi-vector retrieval models, such as ColBERT. Specifically, the authors propose the MUVERA framework, which reduces the multi-vector (MV) similarity of a query/document pair to a single-vector (SV) similarity by constructing a Fixed Dimensional Encoding (FDE) of the MV representation. This reduction allows re-using many highly-optimized MIPS solvers for multi-vector retrieval. The authors also provide a theoretical guarantee on the approximation error of FDEs. Empirically, compared to the previous SOTA heuristic for multi-vector retrieval, MUVERA can achieve similar Recall while significantly reducing latency.
Strengths: - The proposal of Fixed Dimensional Encodings (FDEs) such that its inner product approximate the multi-vector (MV) similarity is novel and interesting
- The authors also provide strong theoretical guarantee under $\epsilon$-approximation to the true MV similarity
- Promising empirical results that significantly reduce the latency on most BEIR datasets compared to PLAID
Weaknesses: Here are the summarized version. See detailed comments/questions in the next Section.
- The index size of MUVERA framework can be quite large
- MUVERA has sub-optimal performance on the MS-Marco dataset
- no released code
Technical Quality: 3
Clarity: 3
Questions for Authors: ### Q1: The index size of MUVERA
As illustrated in Figure 1, MUVERA consider a two-stage search procedure: (1) first using MIPS to obtain a candidate set of documents; and (2) re-rank the candidates with exact Chamfer similarity.
- (a) For the first step, using PQ codes on the FDE of documents and build ANN index indeed save the memory. For the second step, however, computing the exact Chamfer similarity still require storing all the original token embeddings per document?
- (b) What's the memory usage and index size for stage 1 and stage 2, respectively?
### Q2: Why do MUVERA has sub-optimal performance, compared to PLAID, on the MS-Marco dataset?
- (a) Is it because of using data-agnostic partitioning functions (SimHash), making the approximation error of FDE larger?
- (b) If the token embedding distributions are skew and concentrating on certain sub-space (maybe this is the case in MS-Marco?) , is it still a good idea to use SimHash in FDEs? Or k-means would do a better work?
### Other minor comments
- (a) In Figure 1 caption, there is a typo: "comapred" => "compared"
- (b) At Line 133: "vectors that are closer are more are more likely to..." => there seems to be some repeatedly wording
- (c) In Figure 2, the legend on the left-hand-side seems wrong? The orange square should be Doc Embeddings and the blue circle should be Query Embedding, according to the right-hand-side plot?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: To my knowledge, there's no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions. We respond now to the specific questions of the reviewer.
> Q1(a):
We first clarify questions about the index size of MUVERA. The reviewer is correct that we need to store both (some representation of) the FDE’s and original multi-vector (MV) representations; however, we do not need to fully store either of them. Specifically, we can use the same product quantization for the original MV representations that we use for the FDE’s. Since the focus of the paper was on latency (and not necessarily space) and the new FDE method, we did not run our end-to-end evaluations with the compressed MV representations when rescoring. However, doing so would (1) speed up rescoring (since asymmetric PQ scoring is up to an order of magnitude faster than scoring the uncompressed vectors) and (2) result in negligible quality loss. For concrete numbers on (2), we found that using PQ256-2 (i.e. 8x compression) on the original 128-d vectors resulted in *no downstream quality loss* in the recall for Chamfer scoring (i.e. uncompressed R@100 = 91.49 vs compressed = 91.63, or 98.44 uncompressed vs 98.47 compressed for R@1k). If we compress using PQ256-4 (16x compression), we see very negligible quality loss (the recall@N is 91.1 and 98.38 for N=100,1000 respectively). Also note that PLAID and ColBERTv2 also store compressed versions of the original MV embeddings (via centroids + small residual), which could easily be used for rescoring in our case as well.
> Q1(b):
for concrete numbers, when using 10k dimensional FDE’s on MS MARCO (which originally has an average of 78.8*128 = 10086 floats per MV representation), we used PQ256-8 (as reported in the paper) to compress the vectors by 32x, and the full index size is 11 GB (instead of 340GB to store the uncompressed FDE’s). For the original multi-vector representations, the uncompressed index size is about 340 GB on disk. Using PQ256-4 we reduce it to 42GB, and using PQ256-8 it goes down to 21 GB (with negligible quality loss as described above). Thus the total index size for stages (1)+(2) would be 11 + 21 = 32 GB for MS Marco, which is more than a 10x compression from the original vectors, and comparable to PLAID’s index (21.6GB).
> Q2:
We first want to emphasize (1) that we outperform (sometimes significantly) PLAID on all 5 other datasets we studied, in terms of both latency (and sometimes recall too), and (2) PLAID only marginally outperforms our method on MS MARCO. As discussed in the paper, PLAID was highly optimized for MS MARCO due to the prevalence of that dataset, and its parameters were likely overfit. Quoting from a recent reproducibility study of PLAID (https://arxiv.org/pdf/2404.14989): “PLAID’s Pareto frontier is a patchwork of parameter settings; changing one parameter without corresponding changes to the others can result in slower retrieval with no change to effectiveness.” In contrast, using one single set of parameters for all datasets, we get strong results across the board with MUVERA. We also want to emphasize that PLAID was the third in a series of papers (ColBERT, ColBERTv2, PLAID) which attempted to highly optimize the SV Heuristic. MUVERA, on the other hand, is the first paper on the FDE-based approach for MV retrieval, and thus has not seen the same level of optimization. We believe the fact that we match (and often exceed) PLAID on many datasets with this new approach without heavy systems optimization is a strong benefit of our work.
> Q2(a) + (b):
With regards to Simhash vs. K-means: As shown in Figure 3 (the grid-search), SimHash outperforms k-means on MS MARCO for every parameter range on the pareto frontier. The reason for this is as follows: because we need the FDE’s to be small dimensional, we cannot set the number of partitions B to be too large, so we often use B = {16,32,64}. For such small values of B, you cannot hope to capture the global behavior of a dataset of 8.8M * 77.8 ~ 684M vectors in MS MARCO. Thus, using only <100 centers from k-means on nearly a billion vectors will give very little information about any individual datapoint’s MV representation (note we cannot run a different k-means on each document’s MV representation, since the partition needs to be the same for all points!). Thus, data dependent techniques will not help much here. Instead, SimHash likely performs better because it provably is an LSH and gives good approximate partitions (unlike k-means, which has no theoretical guarantees) for *all* points in the dataset. With regards to the question about skew in the dataset – the data would have to be incredibly skewed (i.e. tightly concentrated around 100 clusters) for k-means to work well. Additionally, each MV representation would have to be spread out over the different clusters (otherwise the partition would not split it into smaller parts).
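To make the data-oblivious SimHash partition concrete, here is a toy sketch of the general technique (illustrative only, not the paper's exact FDE construction): each vector's bucket is determined by the sign pattern of a few random Gaussian projections, so the partition is fixed before seeing any data and is identical for all documents and queries.

```python
import numpy as np

def simhash_buckets(X, n_bits, seed=0):
    """Data-oblivious SimHash partition: each row of X lands in one of
    2**n_bits buckets according to the signs of n_bits random Gaussian
    projections. Vectors at a small angle tend to share buckets."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((X.shape[1], n_bits))
    bits = (X @ G) > 0                        # (n, n_bits) sign pattern
    powers = 1 << np.arange(n_bits)           # [1, 2, 4, ...]
    return bits.astype(np.int64) @ powers     # integer bucket id per row
```

Because the projections are drawn independently of the data, the same function maps new or updated documents to buckets without any retraining, in contrast to k-means centroids, which would have to be fit to (and would poorly summarize) hundreds of millions of token vectors with fewer than 100 centers.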
> “(c) In Figure 2, the legend…’
This is a very good catch! We thank the reviewer for finding this mistake in the diagram, as well as the other typos mentioned. We will fix them right away.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification about the index size. I would encourage the authors to also report the index size of MUVERA (stages 1 and 2 respectively) together with the Recall and latency results in the main experiment sections. The index size is also a crucial factor to consider when deploying an ANN search system for online retrieval. Thus, I would like to keep my score as is. | null | null | null | null
Mixture of Link Predictors on Graphs | Accept (poster) | Summary: This paper proposes an ensemble model of different link prediction methods. The authors find that different node pairs on the graph can form a link due to different pairwise representations, and that no single link prediction model can capture all of them. The authors then propose to combine the outputs of different link prediction models into one ensemble model, weighted by heuristic information about the node pair. Experiments on graph benchmarks show the improvement brought by ensembling.
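The gating idea described in this summary can be sketched in a few lines (hypothetical names and a simple linear gate; the paper's actual gating network may differ): heuristic features of a node pair produce softmax weights that blend the individual predictors' scores.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixture_link_scores(heuristics, expert_scores, W):
    """heuristics: (n_pairs, n_feats) per-pair features such as common
    neighbors or shortest-path distance; expert_scores: (n_pairs, n_experts)
    link probabilities from the individual predictors; W: (n_feats, n_experts)
    learned gate weights. Returns one blended score per node pair."""
    gate = softmax(heuristics @ W)            # (n_pairs, n_experts) weights
    return (gate * expert_scores).sum(axis=1)
```

With a zero gate matrix the mixture degenerates to the plain average of the experts; training W lets different node pairs weight different predictors, which is the mechanism the reviewer is summarizing.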
Strengths: 1. The paper is clear and well-written.
2. The motivation of the study is clearly stated. In the preliminary study section, the authors give solid examples of why individual link prediction models can fail, which motivates a combination of such models.
3. The experiment is convincing and comprehensive, showing the competitive performance improvement with a mixture of link prediction models.
Weaknesses: 1. The novelty of the paper is limited. This study proposes a mixture of existing link prediction models, which brings minor research value to the community. The overall contribution makes it read more like a technical report than a research paper. While this paper has its own value, it may be more suitable for other venues.
2. The baselines considered in the experiment are not comprehensive. For example, [1], which is an ensemble of link prediction models, is not evaluated as a baseline.
3. The overall computational efficiency of the mixture method is concerning. On each graph, a set of link prediction models needs to be trained first, which typically involves an extensive process of hyperparameter tuning like in NCN [2]. This step can be too costly for most real-world use cases. The idea of MoE, especially conditional computation, is not leveraged at all to reduce the computational cost.
4. The authors can further benchmark the Link-MoE on [3], which introduces more challenging link prediction tasks.
[1] Ghasemian, A., Hosseinmardi, H., Galstyan, A., Airoldi, E. M., & Clauset, A. (2020). Stacking models for nearly optimal link prediction in complex networks. Proceedings of the National Academy of Sciences.
[2] Wang, X., Yang, H., & Zhang, M. (2023). Neural common neighbor with completion for link prediction.
[3] Li, J., Shomer, H., Mao, H., Zeng, S., Ma, Y., Shah, N., ... & Yin, D. (2024). Evaluating graph neural networks for link prediction: Current pitfalls and new benchmarking. Advances in Neural Information Processing Systems.
Technical Quality: 2
Clarity: 4
Questions for Authors: 1. For the gating model, only the heuristic information is considered as input. However, this can limit the expressiveness of the gating model because of the inherent limitations of the heuristic models themselves. Did the authors consider involving the expert models as inputs to the gating model?
2. Following up on the question above, is there any ablation study on training the gating model and expert models jointly? It may introduce higher memory consumption, but one could consider removing computationally heavy models like SEAL or NBFNet.
Confidence: 5
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: See the weakness and question parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer SDXC,
We appreciate your constructive feedback. We are pleased to provide detailed responses to address your concerns.
**W1:The novelty of the paper is limited.**
**A1:** The novelty of Link-MoE: Our Link-MoE model stands out due to its innovative approach, leveraging the complementary strengths of different GNN4LP models, resulting in substantial performance gains. The core of our model's success lies in the following findings and innovations:
*1. GNN4LP Models are Complementary:* Our preliminary studies show that the overlap between different GNN4LP models is notably low, as shown in Figures 3, 9 and 10 in our paper, indicating that these models offer complementary insights for link prediction. Different node pairs may be best addressed by different models. The critical challenge, then, is determining the most effective way to assign node pairs to the appropriate GNN4LP models.
*2. Heuristic-Informed Gating Mechanism:* Our preliminary studies revealed that different GNN4LP models perform differently across different heuristic groups. By designing a gating model that intelligently applies these heuristics, we facilitate the optimal assignment of node pairs to their most suitable predictors. This is a significant departure from traditional MoE applications, which use the same input features for the gating and the experts. We compare Link-MoE with traditional gating, which only leverages the node features in the gate. The results are shown in Table 1 (Traditional MoE) in the global response. Traditional MoE only achieves performance comparable to the best single expert, while our approach yields superior performance. This demonstrates the effectiveness and rationality of the designed gating model.
**W2:The baselines considered in the experiment are not comprehensive.**
**A2:** Thank you for pointing out the related papers. In addition to the Mean-Ensemble and Global-Ensemble methods in our paper, we tested two more ensemble methods: [1] and [2] utilize a random forest and an MLP, respectively, to ensemble various link predictors. The results are presented in Table 2 in the global response. Link-MoE outperforms all ensemble methods. The superior performance of our approach is attributed to the dynamic nature of our gating module, which assigns customized weights to each expert for every node pair.
[1] Stacking Models for Nearly Optimal Link Prediction in Complex Networks, PNAS'20
[2] An Ensemble Model for Link Prediction based on Graph Embedding, Decision Support Systems'22
**W3:The overall computational efficiency of the mixture method is concerning. The idea of MoE, especially conditional computation, is not leveraged at all to reduce the computational cost.**
**A3:** We would like to highlight that our framework differs from traditional MoE methods. Our goal is to leverage different GNN4LP models to address various node pairs effectively. We utilize a two-step training strategy to train the experts and a lightweight gating model separately. While training multiple experts does increase computational cost, our approach only needs a few experts to achieve superior performance compared to the baselines.
To demonstrate this, we conducted experiments on ogbl-collab and Pubmed datasets using only 3 or 4 experts. The detailed settings and results are shown in Table 1 (3 Experts \& 4 Experts) in the global response.
We observe that using only 3 or 4 experts can achieve comparable performance to using all experts, indicating the computational cost of Link-MoE is not prohibitively high.
**W4:The authors can further benchmark the Link-MoE on [3].**
**A4:** We conducted experiments on ogbl-collab and PubMed under the HeaRT setting [3]. The results, shown in Table 4 of the global response, demonstrate that our model significantly outperforms all individual experts on both datasets. This highlights the effectiveness of our approach in leveraging the strengths of multiple experts.
**Q1:Do authors consider involving the expert models as inputs to the gating models?**
**A5:** We use the heuristics as input because the GNN4LP models are aligned with the heuristic methods, as investigated in Section 3.2 of our paper. To investigate the impact of involving expert model predictions as input to the gating model, we conducted experiments on the ogbl-collab and Pubmed datasets by concatenating the experts' prediction results with the heuristic features. The results are shown in Table 1 (With Experts as Input) of the global response.
We observed that involving the experts' prediction results as additional input did not lead to improvement. In fact, this approach may result in lower performance compared to using heuristics alone. This suggests that the outputs of the expert models may not effectively reflect their importance for specific node pairs, highlighting the effectiveness and rationality of using heuristics as the gating input.
**Q2:Is there any ablation study on training the gating and expert models jointly?**
**A6:** We explored an end2end training strategy that trains several experts alongside the gating module from scratch. We conducted experiments on Cora and ogbl-collab to evaluate this approach. The results and experimental settings are detailed in Table 3 of the global response.
The end2end results were significantly worse than Link-MoE's. A potential reason is that different experts have varying convergence rates, leading to a collapse problem: Figure 1(a) of the global response illustrates that strong experts, such as NCN and NCNC, converge faster and dominate the performance. These strong models are assigned higher gating weights while MLP and GCN are assigned zero weights, as shown in Figure 1(b) of the global response, which limits the ability to leverage additional information from MLP and GCN. While our end2end approach did not yield improved results, it remains an interesting and challenging direction to explore the conditional computation of MoE for efficient end2end training.
---
Rebuttal Comment 1.1:
Comment: Thanks for the extensive efforts in addressing my concerns. However, I still have several major concerns that prevent me from recommending acceptance of the work. As previously discussed in W1 and W3, I am still concerned about the contribution and motivation of Link-MoE.
1. Link-MoE achieves good performance by ensembling a set of SOTA GNN4LP models. However, this makes the contribution limited because ensembling may be the most well-known method to improve performance. In general, it is not a "surprising" thing to see performance gain by bundling SOTA methods together.
2. The MoE naming is a bit misleading. The authors introduce "MoE" techniques in Link-MoE. However, Link-MoE neither (1) reduces the parameter size of the entire model, nor (2) achieves better efficiency by routing the prediction to different domain experts.
3. Given that LP is a practical problem, scalability is one of the most crucial aspects to consider when choosing LP methods. However, Link-MoE is computationally expensive in both the training and inference stages. During training, each expert model needs to be trained individually with proper hyperparameter tuning. (This is acknowledged in the Limitation section of the paper.) During inference, Link-MoE requires the link representation from all the expert models. This paradigm makes Link-MoE almost impossible to deploy in industrial applications.
If possible, I want to know more about the authors’ comments on the practical applicability of Link-MoE to real-world scenarios.
4. As far as I can see from the paper, the only aspect (research value vs novelty vs insights vs practical usage) that Link-MoE shines on is its superior performance improvement on various benchmarks. However, as [3] points out, HeaRT is a more realistic and personalized evaluation benchmark for LP methods. Since evaluating on HeaRT only requires a small change in the model inference stage (change the random negative edges to hard negative edges in HeaRT) and no need to retrain anything, it is expected that Link-MoE can be evaluated on all datasets in HeaRT, especially those OGB datasets. However, the results are missing from the rebuttal or main experiment. When I revisited HeaRT paper, Observation 1 in Section 4.3 claims that simpler models actually perform better than complex ones. This seems to contradict what is observed in Link-MoE.
In general, I find the paper well-written and the preliminary study part is interesting. However, the major concerns above outweigh the paper's strength. Therefore, I keep my original rating.
---
Reply to Comment 1.1.1:
Title: Further response to Reviewer SDXC - Part 1
Comment: Dear Reviewer SDXC,
Thank you for your prompt reply, allowing us the opportunity to further clarify your concerns.
## 1. Difference from Traditional Ensembling Methods
Although Link-MoE leverages different link prediction models, it is not a simple ensembling method. Meanwhile, we respectfully disagree that "it is not a 'surprising' thing to see performance gain by bundling SOTA methods together." In fact, traditional ensembling methods often do not outperform the strongest individual base model. To illustrate this, we tested four ensembling methods, including the Mean-Ensemble and Global-Ensemble from our paper, as well as two new ensembling methods [1][2]. For all these methods, we used the same base models to ensure a fair comparison. The results are shown below:
| | ogbl-collab | Pubmed |
|---|---|---|
| Best Expert | 66.13 | 44.73 |
| Mean-Ensemble | 66.82 | 38.54 |
| Global-Ensemble | 67.08 | 37.63 |
| Ensemble [1] | 69.65 | 41.21 |
| Ensemble [2] | 65.11 | 43.92 |
| Traditional gating | 66.59 | 42.15 |
| Link-MoE | 71.32 | 53.10 |
The results indicate that these traditional ensembling methods typically achieve comparable or even worse performance than the best individual expert. This underscores that the key to Link-MoE’s success is its ability to assign different node pairs to different experts via our gating mechanism. The design of this gating mechanism is informed by our preliminary studies, which revealed that different experts excel at different heuristics.
Additionally, we tested the traditional gating in MoE, which uses the same input as the experts. The results show that the performance of 'Traditional Gating' is even worse than the best expert. This highlights that the strength and contribution of Link-MoE lies not in simple ensembling, but in its intelligent and heuristic-informed allocation of node pairs to the most suitable experts.
## 2. The name of MoE
The term "Mixture of Experts" (MoE) is a broad concept that can be traced back to foundational works such as [3][4]. These MoE models follow the divide-and-conquer principle, where the problem space is divided into subspaces, each of which is potentially easier to solve with specialized experts. Many papers [3,4,5,6] have leveraged the MoE framework to improve model performance, as outlined in the survey [7]. In recent years, MoE has gained popularity again due to the introduction of sparse gating mechanisms, which allow the model parameters to be increased while maintaining efficiency. However, the essence of MoE is not limited to parameter efficiency or reduction; the core idea is the strategic use of specialized experts to handle different aspects of the problem. Our Link-MoE model adheres to this divide-and-conquer principle by routing different node pairs to different experts based on the characteristics of the node pairs. Therefore, we believe that Link-MoE can be considered an MoE model.
---
Reply to Comment 1.1.2:
Title: Further response to Reviewer SDXC - Part 2
Comment: ## 3. The Practical Applicability of Link-MoE.
Some of the authors have worked in industry, and we have collaborated with industry on various link prediction problems. Based on this experience, we are glad to discuss the potential practical applicability of Link-MoE from the following two perspectives:
First, in many practical link prediction applications, both training and inference are offline. For example, for friend recommendation in social media, the list of potential friends for a particular user is often pre-computed. In such applications, Link-MoE is potentially applicable.
Second, there are also many link prediction applications that require online inference, such as session-based recommendation. For these applications, we would like to highlight that Link-MoE is also potentially applicable, for the following two reasons. (1) These applications often adopt a two-stage strategy to ensure efficiency: a simple method first recalls a small subset of $L$ candidate items, and the link prediction algorithm then scores only these $L$ items. $L$, often only hundreds or thousands of items, is much smaller than the whole item set, which is why sophisticated methods can be applied in these applications. (2) We acknowledge that during training, Link-MoE requires training several experts, which can be computationally intensive. However, the training phase is often conducted offline, so the computational cost during training is less of an immediate concern.
**Regarding inference, Link-MoE can also leverage sparse gating to improve efficiency.** During inference, the gating model first determines the importance of each expert, allowing us to compute predictions using only the most relevant experts. This selective approach significantly reduces the computational load during inference. Here, we demonstrate the results of Top-2 and Top-3 gating, which use only 2 or 3 experts for each sample during inference.
| | ogbl-collab | Pubmed |
|---|---|---|
| Best Expert | 66.13 | 44.73 |
| Top-2 Gating | 71.22 | 50.03 |
| Top-3 Gating | 71.94 | 51.13 |
| All Experts | 71.32 | 53.15 |
The results show that using only 2 or 3 experts achieves performance comparable to using all experts, demonstrating the potential of Link-MoE for online inference in real-world applications.
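The Top-k selection step described above can be sketched in a few lines of NumPy (a hypothetical helper for illustration, not the authors' implementation): keep only the k largest gating weights per node pair, renormalize them, and score just those k experts at inference.

```python
import numpy as np

def topk_gate(weights, k):
    """Zero out all but the k largest gating weights per row, then
    renormalize, so only k experts need to be scored per node pair."""
    drop = np.argsort(weights, axis=1)[:, :-k]   # indices of the m-k smallest
    sparse = weights.copy()
    np.put_along_axis(sparse, drop, 0.0, axis=1)
    return sparse / sparse.sum(axis=1, keepdims=True)

# Dense gating weights for 2 node pairs over 4 experts.
dense = np.array([[0.50, 0.30, 0.15, 0.05],
                  [0.10, 0.20, 0.30, 0.40]])
sparse = topk_gate(dense, k=2)   # only 2 nonzero weights per row
```

With k=2, the first pair would be scored only by experts 0 and 1 (renormalized weights 0.625 and 0.375), the second only by experts 2 and 3.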
## 4. The HeaRT setting.
We would like to clarify that the statement "Observation 1 in Section 4.3 claims that simpler models actually perform better than complex ones" is not accurate. Observation 1 in the HeaRT paper actually states that the performance gap between simpler models and GNN4LP models is reduced under the HeaRT setting; GNN4LP models still outperform simpler models, as shown in Table 5 of the latest version of the HeaRT paper [8] (v3 on arXiv).
We have conducted additional experiments under the HeaRT setting with all OGB datasets, and the hit@20 results are shown below. Our findings indicate that Link-MoE still outperforms the best baseline models by a significant margin, even in this more challenging evaluation setting.
| | ogbl-collab | ogbl-ddi | ogbl-ppa | ogbl-citation2 |
|---|---|---|---|---|
| Best Expert | 23.35 ± 0.73 | 67.19 ± 1.18 | 82.24 ± 0.40 | 53.76 ± 0.20 |
| Link-MoE | 39.58 ± 0.10 | 68.73 ± 0.43 | 88.49 ± 0.56 | 58.04 ± 0.47 |
[1] Ghasemian, Amir, et al. Stacking Models for Nearly Optimal Link Prediction in Complex Networks, PNAS'20
[2] Chen, Yen-Liang, Chen-Hsin Hsiao, and Chia-Chi Wu. "An ensemble model for link prediction based on graph embedding." Decision Support Systems 157 (2022): 113753.
[3] Jacobs, Robert A., et al. "Adaptive mixtures of local experts." Neural computation 3.1 (1991): 79-87.
[4] Jordan, Michael I., and Robert A. Jacobs. "Hierarchical mixtures of experts and the EM algorithm." Neural computation 6.2 (1994): 181-214.
[5] Shahbaba, Babak, and Radford Neal. "Nonlinear models using Dirichlet process mixtures." Journal of Machine Learning Research 10.8 (2009).
[6] Eigen, David, Marc'Aurelio Ranzato, and Ilya Sutskever. "Learning factored representations in a deep mixture of experts." arXiv preprint arXiv:1312.4314 (2013).
[7] Masoudnia, Saeed, and Reza Ebrahimpour. "Mixture of experts: a literature survey." Artificial Intelligence Review 42 (2014): 275-293.
[8] Li, Juanhui, et al. "Evaluating graph neural networks for link prediction: Current pitfalls and new benchmarking." Advances in Neural Information Processing Systems 36 (2024).
We hope that we have addressed the concerns in your comments, and please kindly let us know if there is any further concern, and we are happy to clarify.
Best regards,
All authors
---
Reply to Comment 1.1.3:
Title: A friendly reminder
Comment: Dear Reviewer SDXC,
We appreciate your thorough review and have provided a detailed response to your concerns. Since the end of the discussion period is approaching, we kindly request your feedback on our response. Your insights are crucial, and we want to ensure that any remaining issues are addressed before the discussion period concludes. Please let us know if there are any further points that need clarification.
Thank you for your time and consideration.
Best regards,
All authors | Summary: The paper proposes a mixture of experts model, Link-MoE, for link prediction on graphs. The motivation behind the proposed approach is that different node pairs within the same dataset necessitate varied pairwise information for accurate prediction, while existing methods consider the same pairwise information uniformly across all pairs. To address this, Link-MoE consists of multiple expert models (existing link prediction methods) and a gating model whose goal is to produce importance scores to weight the different expert models according to their contribution towards the final prediction. Experiments on several real-world datasets show that the proposed Link-MoE outperforms existing competing methods, validating the rationale behind its design.
Strengths: S1) The source code is provided with the submission
S2) The paper is overall well-written
S3) The proposed approach is compared against several competing methods and baselines.
S3) Strong experimental results: the proposed method outperforms all the considered competing methods.
S4) The proposed approach is well motivated
S5) The proposed method was assessed on standard benchmark datasets for link prediction
Weaknesses: W1) The figures are hardly readable sometimes (e.g., Fig. 8), the font size (in the legend) should be increased.
W2) Notation problem: in Eq. 1 the gating function G has only one parameter while in Eq. 2 G takes both x_ij and s_ij. Please uniform it as this may be a little confusing for the reader.
W3) The difference between the proposed Link-MoE and the considered baseline Global-Ensemble should be further discussed in the main paper, given the strong similarity between the two approaches. As a suggestion, I recommend the authors move lines 572-575 to the main paper, since they describe the main difference between the two methods, i.e., the importance weight vector is uniform in Global-Ensemble while it is adapted to the pair in Link-MoE.
W4) To evaluate the effectiveness of the proposed method different metrics are reported for different datasets.
Minor:
- Eq. 3: final extra parenthesis
- Line 220: “consits” -> “consists”
- Line 234: extra ”the” in “represent the all the”
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1) Why report different metrics for different datasets? For example, in line 155 the authors state that they report MRR for Citeseer and Hits@50 for ogbl-collab. Similarly, in Table 2, different metrics are reported for different datasets. I have seen this approach used in other papers on link prediction, but I am not convinced it is the correct way to proceed because, as shown in the tables in the appendix (e.g., Tables 5-7), the proposed method is not always the best when the evaluation metrics are changed.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 758a,
Thank you so much for your support and recognition of our framework. We are pleased to provide detailed responses to address your concerns.
**W1: The figures are hardly readable sometimes (e.g., Fig. 8), the font size (in the legend) should be increased.**
**A1:** Thank you for pointing out this issue. We will enlarge the font size of the axis labels, legend, and axis ticks for all figures to ensure they are more readable.
**W2: Notation problem: in Eq. 1 the gating function G has only one parameter while in Eq. 2 G takes both $x_{ij}$ and $s_{ij}$. Please uniform it as this may be a little confusing for the reader.**
**A2:** In Eq. 1, we initially used $h_{ij}$ to represent all the heuristics, including both the pairwise node features $x_{ij}$ and the structural heuristics $s_{ij}$.
To alleviate potential confusion, we will rewrite Eq. 1 to match Eq. 2 as follows:
$$
Y\_{ij} = \sigma \left( \sum\_{o=1}^m G(\mathbf{x}\_{ij}, \mathbf{s}\_{ij})\_o E\_o(\mathbf{A}, \mathbf{X})\_{ij} \right)
$$
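For concreteness, the rewritten equation can be sketched in a few lines of NumPy (a hypothetical illustration only: a softmax over the gating outputs is assumed here, while in the paper $G$ is a learned gating model over the heuristics):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def link_moe_score(gate_logits, expert_scores):
    """Y_ij = sigma( sum_o G(x_ij, s_ij)_o * E_o(A, X)_ij ).

    gate_logits  : (n_pairs, m) gating outputs, produced elsewhere from
                   the heuristics [x_ij, s_ij] of each node pair
    expert_scores: (n_pairs, m) raw link scores from the m experts
    """
    weights = softmax(gate_logits)                  # G(.)_o, rows sum to 1
    mixed = (weights * expert_scores).sum(axis=1)   # weighted expert sum
    return sigmoid(mixed)                           # final link probability

# Toy example: 2 node pairs, 3 experts; each pair favors a different expert.
gate_logits = np.array([[4.0, 0.0, 0.0],
                        [0.0, 0.0, 4.0]])
expert_scores = np.array([[3.0, -3.0, -3.0],
                          [-3.0, -3.0, 3.0]])
probs = link_moe_score(gate_logits, expert_scores)
```

The key point the sketch makes explicit is that the gating weights are a function of each pair's heuristics, so different node pairs can be routed to different experts.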
**W3: The difference between the proposed Link-MoE and the considered baseline Global-Ensemble should be further discussed in the main paper given the strong similarity between the two approaches.**
**A3:** We will further discuss the Global-Ensemble method in the main text. Specifically, we will include a discussion of this method at the end of Section 3. We will first include the definition of the Global-Ensemble method (given by lines 570-575 in the appendix). We will then discuss the weaknesses of this method, mainly that the weights are non-adaptive and are the same for all links. Our empirical results in Section 3.1 and 3.2 also show that this kind of design is unable to model all links. The deficiency of this formulation serves as a strong motivation for our proposed method Link-MoE in Section 4.
**W4: To evaluate the effectiveness of the proposed method different metrics are reported for different datasets.**
**A4:** In Table 1, we use MRR for evaluating Cora, Citeseer, and Pubmed datasets, while Hits@50, Hits@100, and MRR are used for the ogbl-collab, ogbl-ppa, and ogbl-citation2 datasets. This approach is consistent with prior research, where notable methods [13, 14, 15, 16, 27, 53] have been designed for link prediction tasks. Additionally, we provide the results of other metrics in the Appendix, specifically in Tables 5, 6, 7, and 8.
**Q1: Why reporting different metrics for different datasets?**
**A5:** For the training of Link-MoE, we adhere to the experimental settings outlined in prior research [53], which serves as a benchmark for the link prediction task.
During training, we select and save the best model based on validation performance. However, there are multiple evaluation metrics, which might not be well aligned. In this paper, we select the best model based on MRR for the Cora, Citeseer, and Pubmed datasets. For the ogbl-collab, ogbl-ppa, and ogbl-citation2 datasets, we use Hits@50, Hits@100, and MRR as selection metrics, following [53].
As a result, our method might not perform optimally on all individual metrics due to the misalignment of different metrics. Nevertheless, we consistently rank within the top 3 across almost all datasets and metrics. These observations indicate that Link-MoE is capable of delivering strong performance across various metrics.
In the revision, we plan to fix all the typos and add another experiment that selects the best results based on each specific metric to provide a more comprehensive evaluation.
We hope that we have addressed the concerns in your comments, and please kindly let us know if there is any further concern, and we are happy to clarify.
---
Rebuttal Comment 1.1:
Title: Follow-up discussion
Comment: I appreciate the detailed responses from the authors (as well as those to other reviewers), which have made me more enthusiastic about this work. I have two follow-up questions:
- I am curious about how you computed the Jaccard coefficient to calculate the overlap ratio between each pair of heuristics (Figures 2 and 3). Did you use a threshold to decide whether a prediction for an edge is ‘present’ or ‘missing’? Otherwise, I cannot see how you counted the number of edges that were correctly predicted or not by a pair of methods.
- Another point of curiosity: In line 129, you state that you assess the combination of a pair of heuristics by simply adding their original values. Does this mean that you did not normalize the scores before adding them? Since some heuristics may have different ranges, this approach might not be appropriate.
---
Reply to Comment 1.1.1:
Title: Response to the follow-up questions
Comment: Dear Reviewer 758a,
Thank you for your thoughtful responses and your enthusiasm about our work. We are glad to answer your follow-up questions.
**1. Computation of Jaccard Coefficient**
For the calculation of the Jaccard coefficient, we use the Hits@K metric for each edge. Specifically, we choose Hits@3 for small datasets and Hits@20 for the OGB datasets. We first rank the prediction scores of each method for both positive and negative edges. If the prediction score of a positive edge is in the top-K, we label this positive edge as 'present' and add it to the correct prediction set. In this way, we can calculate the Jaccard coefficient by comparing the correct prediction sets for each pair of methods.
Besides, using a threshold, as you suggested, would also be a feasible approach.
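The computation described above can be sketched as follows (a hypothetical simplification of the authors' procedure: the per-edge Hits@k test is reduced to comparing each positive score against the k-th best negative score):

```python
import numpy as np

def correct_set(pos_scores, neg_scores, k):
    """Indices of positive edges that rank in the top-k against the
    negatives, i.e. whose score beats the k-th best negative score."""
    threshold = np.sort(neg_scores)[-k]
    return {i for i, s in enumerate(pos_scores) if s > threshold}

def jaccard_overlap(set_a, set_b):
    """Jaccard coefficient between two correct-prediction sets."""
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 0.0

# Toy scores: 4 positive edges, 4 negative edges, two methods.
neg = np.array([0.1, 0.2, 0.3, 0.9])
method_a = np.array([0.95, 0.25, 0.80, 0.05])
method_b = np.array([0.92, 0.85, 0.10, 0.05])
a = correct_set(method_a, neg, k=3)   # edges a method gets "present"
b = correct_set(method_b, neg, k=3)
overlap = jaccard_overlap(a, b)       # low overlap => complementary methods
```

A low Jaccard coefficient between two methods' correct-prediction sets is what signals complementarity in Figures 2 and 3.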
**2. Combination of Heuristics**
Regarding the combination of heuristic scores, we did perform normalization as you suggested. Specifically, we normalize each heuristic (H) to the range of [0, 1] using $\frac{H-Min\\_H}{Max\\_H - Min\\_H}$, where $Max\\_H$ and $Min\\_H$ are the maximum and minimum heuristic values in the dataset. One exception is the Shortest Path (SP), where a smaller SP indicates a higher likelihood that two nodes are connected. Therefore, we first compute $\frac{1}{SP}$ and then normalize $\frac{1}{SP}$ in the same way as the other heuristics.
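The normalization described above can be sketched as follows (hypothetical helper name; assumes the heuristic values are not all equal and, when inverting, that SP >= 1):

```python
import numpy as np

def normalize_heuristic(values, invert=False):
    """Min-max normalize a heuristic to [0, 1]. With invert=True the
    values are first mapped to 1/v (used for Shortest Path, where a
    smaller distance means a likelier link)."""
    v = np.asarray(values, dtype=float)
    if invert:
        v = 1.0 / v
    return (v - v.min()) / (v.max() - v.min())

cn = normalize_heuristic([0, 2, 8])               # e.g. common-neighbor counts
sp = normalize_heuristic([1, 2, 4], invert=True)  # shortest-path lengths
```

After this step, heuristics with very different raw ranges can be added or concatenated on a comparable scale.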
We will add these details in our revision. Thank you once again for your thoughtful feedback and for helping us improve our work. If you have any further comments or questions, please let us know.
Best regards,
All Authors | Summary: Link prediction in graphs is a fundamental task in graph machine learning and multiple heuristics and ML algorithms have been designed in research to leverage the pairwise and structural information to predict links between nodes. This work takes inspiration from the success of MoE models across various verticals and proposes a new method that outperforms the SoTA baselines on this task on several standard datasets. The authors demonstrate that heuristics and algorithms for LP are very diverse in their capabilities to predict links depending on the node structure and pairwise features (i.e. when one would perform well, the other might not). Due to this little overlap in their abilities the authors propose a dynamically weighted ensemble (MoE) like approach - such that a gating network predicts the weight given to each expert's prediction and the overall value is the weighted sum.
They also provide a very practical method to train the experts and the gating network, achieving SoTA performance and beating the second best by significant margins.
Strengths: The paper is very well written, with the motivations made clear and the experiments clearly demonstrating what the authors set out to show. The number of experiments and metrics covered is large enough to justify the claims in a broader sense, with all the obvious baselines being addressed. I particularly liked the comparisons with the Mean and Global weight ensembles, and LPFormer.
The authors also provide an efficient and effective training strategy to build this E2E - which in my opinion can be understood and used easily by the community.
The paper also provides reasonable explanations for the results and conducts ablations (in section 5) required to improve understanding and readability for a potential user.
This result greatly improves on the SoTA baselines and should be generalisable to multiple graph use cases.
Weaknesses: - The method being used is fairly obvious and intuitive.
- It solves for a very specific task - Link Prediction in an inductive setting only. Not clear to me how it could be leveraged for a transductive setting as well
Technical Quality: 4
Clarity: 4
Questions for Authors: - Did you face any convergence issues with training the gating network?
- What happens when you use a graph network which is transductive in nature? Can that information be leveraged in some manner?
- In situations where the graph is a KG on language, can LLMs outperform this method?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The method is pretty generic, and the societal impacts are not discussed in much detail as they won't differ from those of existing methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer fyCt,
We sincerely appreciate your recognition of our framework and insightful comments. We are pleased to provide detailed responses to your questions.
**W2: It solves for a very specific task - Link Prediction in an inductive setting only. Not clear to me how it could be leveraged for a transductive setting as well.**
**A1:** We'd like to clarify that our paper actually focuses on the transductive setting, where the same graph is used during both the training and testing phases. In contrast, the inductive setting involves unseen nodes and connections in the test graphs. Since the vast majority of methods for link prediction tasks are designed specifically for the transductive setting (please refer to [14, 15, 25, 26, 13, 16, 27] in our paper), we have limited our focus to it as well.
However, due to the generality of our Link-MoE method, it should have no issue adapting to the inductive setting when the individual expert models are themselves inductive. In the inductive setting, we would train the gating model based on the heuristics calculated from the training graph. During inference, we would simply recalculate the heuristics based on the test graph before applying Link-MoE. We believe this approach would allow Link-MoE to perform effectively in inductive settings, and we intend to explore this in future work.
**Q1: Did you face any convergence issues with training the gating network?**
**A2:** We did not encounter any convergence issues with training the gating network in our Link-MoE model. Our model successfully converges on the datasets we used, as evidenced by the smooth training loss curves on Pubmed presented in Figure 2 in the global response.
**Q2: What happens when you use a graph network which is transductive in nature? Can that information be leveraged in some manner?**
**A3:** As discussed in A1, while our Link-MoE focuses on the transductive setting, it can be generalized to the inductive setting. During the training phase, we train the gating model based on the heuristics calculated from the training graph. During inference, we would recalculate the heuristics based on the test graph before applying Link-MoE.
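To make the "recalculate the heuristics based on the test graph" step concrete, here is a minimal illustrative sketch (not from the paper; the function name and the particular heuristics chosen are our assumptions) of computing gating inputs such as common neighbors, Adamic-Adar, and feature similarity for a node pair:

```python
import numpy as np

def pair_heuristics(adj, feats, u, v):
    """Recompute simple link-prediction heuristics for pair (u, v) on a given graph.

    adj:   (n, n) 0/1 adjacency matrix of the (train or test) graph
    feats: (n, d) node feature matrix
    """
    nbrs_u = set(np.flatnonzero(adj[u]))
    nbrs_v = set(np.flatnonzero(adj[v]))
    common = nbrs_u & nbrs_v
    cn = float(len(common))                                  # common neighbors (LSP)
    aa = sum(1.0 / np.log(adj[w].sum()) for w in common
             if adj[w].sum() > 1)                            # Adamic-Adar (LSP)
    cos = float(feats[u] @ feats[v]                          # feature proximity (FP)
                / (np.linalg.norm(feats[u]) * np.linalg.norm(feats[v]) + 1e-12))
    return np.array([cn, aa, cos])
```

On an inductive split, one would call such a function with the test graph's adjacency and features before feeding the result to the already-trained gating model.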
**Q3: In situations where the graph is a KG on language, can LLMs outperform this method?**
**A4:** We appreciate the inspiring question.
Due to the uniqueness of KGs, the methods used for KGs differ from those used on non-KGs (i.e., the graphs used in our paper). This is because the type of structural information needed to accurately predict new links depends on the graph. On KGs, research shows that path-based information connecting the two nodes is necessary for strong performance [16]. On the other hand, for link prediction on non-KGs, heuristics such as common neighbors or feature similarity tend to be more important [17]. **Because of the disparity in inductive bias, in our paper we limit our focus to link prediction on non-KGs only**. However, due to the generality of our method Link-MoE, it should have no issue adapting to link prediction on KGs, as we only need to use KG-specific methods as the experts.
Lastly, as of now there is no consensus on whether LLMs can outperform graph-based methods for link prediction on KGs. However, recent work has shown that for inductive link prediction, LLM-based methods can potentially outperform SOTA graph-based methods on KGs (see [1] below).
[1] Wang, Kai, et al. "LLM as Prompter: Low-resource Inductive Reasoning on Arbitrary Knowledge Graphs." arXiv preprint arXiv:2402.11804 (2024). | Summary: This paper presents a mixture of experts model, termed Link-MoE, for link prediction on graphs. Link-MoE individually trains various link prediction models as experts and selects the most appropriate experts for different node pairs. The prediction results from the selected experts are then weighted to produce the final prediction. Experiments and analyses validate the effectiveness of the proposed method.
Strengths: 1. This paper is generally well-written and easy to follow. Necessary analyses are conducted to well motivate the design of the proposed method.
2. The proposed method is presented as a general framework, so that various link prediction models (i.e., experts) can be combined via different weights generated by the gating function.
3. The experimental design can largely validate the efficacy of the proposed method.
Weaknesses: 1. This paper presents a straightforward application of the MoE model to the problem of link prediction. Despite its empirical effectiveness, the technical novelty appears limited. The methodological contributions could be enhanced by addressing specific problems associated with MoE models, e.g., the collapse problem, in the context of link prediction.
2. The experimental evaluation could provide more insights and deeper analyses on important issues, for example,
- How does Link-MoE determine the appropriate number of experts?
- On heterophilic graphs, how do the experts selected by Link-MoE differ from those selected on homophilous graphs?
3. The complexity of the proposed method can be high. The nature of using MoE makes the proposed method difficult to scale up.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses above.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations in terms of the scalability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer bwf6,
We appreciate your constructive feedback. We are pleased to provide detailed responses to address your concerns.
**W1: Despite its empirical effectiveness, the technical novelty appears limited.**
**A1:** The novelty of Link-MoE. Our Link-MoE model stands out by intelligently leveraging the complementary strengths of different GNN4LP models, resulting in substantial performance gains. The core of our model's success lies in the following findings and innovations:
*1. GNN4LP Models are Complementary:* Our preliminary studies show that the overlap between the predictions of different GNN4LP models is notably low, as shown in Figures 3, 9 and 10 in our paper, indicating these models offer complementary insights for link prediction. Different node pairs may be best addressed by different models. The critical challenge, then, is determining the most effective way to assign these node pairs to the appropriate GNN4LP models.
*2. Heuristic-Informed Gating Mechanism:* Our preliminary studies revealed that different GNN4LP models perform differently across different heuristic groups, as shown in Figures 4, 11 and 12 in our paper. By designing a gating model that intelligently applies these heuristics, we facilitate the optimal assignment of node pairs to their most suitable predictors. This strategic use of heuristics marks a significant departure from traditional MoE applications, which use the same input features for gating and experts [1]. We compare an MoE with traditional gating against Link-MoE. The results are shown in Table 1 (Traditional MoE) in the global response. The traditional MoE only achieves performance comparable to the best single expert, while our approach yields superior performance.
[1] Towards Understanding Mixture of Experts in Deep Learning. NeurIPS'22
**The Collapse problem.** In traditional MoE models, the collapse problem [37] can occur when a single expert is consistently selected, resulting in the under-utilization and inadequate learning of other experts. However, we employ a two-step training strategy for Link-MoE. First, we train each expert separately. Then we train the gating model to leverage the strengths of each expert effectively. This approach ensures that each expert is well-trained before the gating model is introduced. We empirically observe that the collapse problem does not occur in our model. As illustrated in Figure 8 of this paper, different experts are activated to model different node pairs.
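The second step of this two-step strategy (experts frozen, only the gate learned on top of their outputs) can be sketched as follows. This is our illustrative simplification with a linear-softmax gate and a squared-error loss, not the paper's actual implementation:

```python
import numpy as np

def train_gating(heuristics, expert_scores, labels, lr=0.5, epochs=200):
    """Step 2 of the two-step strategy: experts are frozen, only the gate is learned.

    heuristics:    (n_pairs, n_feats)   gating inputs (e.g. CN, AA, feature similarity)
    expert_scores: (n_pairs, n_experts) frozen experts' link probabilities
    labels:        (n_pairs,)           1 for positive pairs, 0 for negatives
    """
    W = np.zeros((heuristics.shape[1], expert_scores.shape[1]))
    for _ in range(epochs):
        logits = heuristics @ W
        g = np.exp(logits - logits.max(axis=1, keepdims=True))
        g /= g.sum(axis=1, keepdims=True)          # per-pair gating weights
        p = (g * expert_scores).sum(axis=1)        # mixture prediction
        # squared-error gradient pushed back through the softmax gate:
        # dL/dlogit_j = (p - y) * g_j * (s_j - p)
        dlogits = (p - labels)[:, None] * g * (expert_scores - p[:, None])
        W -= lr * heuristics.T @ dlogits / len(labels)
    return W
```

Because every expert is fully trained before the gate ever sees it, the gate cannot starve an under-trained expert of gradient signal, which is the intuition for why collapse does not arise in this regime.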
We also tested the end2end training of the experts and gating models, as shown in Table 3 and Figure 1 in the global response. Our results show that the convergence speed of different experts varies significantly, which often leads to the model collapsing to a single expert. As a result, the performance of end2end training is not as good as the proposed Link-MoE. Despite this, it remains an interesting and challenging idea to explore the effective end2end training.
**W2.1: How does Link-MoE determine the appropriate number of experts?**
**A2:** There are mainly three types of pairwise structural information in link-prediction tasks: local structure proximity (LSP), global structure proximity (GSP), and feature proximity (FP). Different GNN4LP models leverage different heuristic information. Based on these, we classify these methods into three groups: LSP (NeoGNN, NCN, NCNC, n2v), GSP (Seal, Buddy, NbfNet, PEG), and FP (MLP, GCN).
For the selection of experts, it is suggested to include methods that cover all groups due to the complexity of connection patterns across different datasets. To verify this, we conducted an experiment on the ogbl-collab dataset. When using all the models, the Hits@50 score is 71.32. However, if we remove all LSP experts, the Hits@50 score drops to 67.93. Interestingly, if we include only one LSP expert, the Hits@50 score can reach 71.25. This demonstrates the necessity of local structure information for the ogbl-collab dataset, but it also indicates that only one model may be sufficient to capture this information effectively.
Furthermore, our experiments show that using just a few experts (3 or 4) can achieve similar performance levels to using all experts, as shown in A4.
**W2.2: On heterophilic graphs, how do the experts selected by Link-MoE differ from those selected on homophilous graphs?**
**A3:** We analyzed the weights generated by the gating mechanism on two heterophilic graphs, Chameleon and Squirrel. The results are shown in Figure 3 in the global response. We observed that feature proximity-based experts, such as MLP and GCN, are rarely employed for both datasets. This is consistent with the characteristics of heterophilic graphs, where connected nodes tend to have dissimilar features. These findings demonstrate the effectiveness of our gating design, as it accurately selects the appropriate experts for different types of graphs.
**W3: The complexity of the proposed method can be high.**
**A4:** The proposed Link-MoE model employs a two-step training strategy, which first trains the single experts and then trains a lightweight gating model. The most time-consuming part is the training of the single experts. However, as discussed in A2, there is no need to use all available experts, which can effectively reduce complexity and enhance scalability. We conduct experiments on ogbl-collab and Pubmed using only 3 or 4 experts (please refer to "Different number of experts" in the global response for the expert selection details). The results are shown in Table 1 (3 Experts & 4 Experts).
From the results, we find that using only 3 or 4 experts achieves performance comparable to using all the experts in the paper. Additionally, the two-step training strategy of our Link-MoE model introduces another layer of efficiency by allowing the integration of pre-trained models. In real-world scenarios where models are continually updated, this approach is highly efficient: when a new model arrives, we only need to train the gating model. | Rebuttal 1:
Rebuttal: # Global Response
We thank the reviewers for the valuable comments and suggestions. In this global response, we provide descriptions of the tables and figures in the rebuttal PDF file.
**Table 1** presents the results of traditional MoE, a few experts, and experts used as input features for the ogbl-collab and Pubmed datasets. The evaluation metrics are Hits@50 and MRR for ogbl-collab and Pubmed, respectively.
1. Traditional MoE. It uses the same input features for the gating model and experts. Traditional MoE only results in comparable performance to the best single expert, while our approach yields superior performance.
2. Different number of experts. For ogbl-collab, we use MLP, NCNC, BUDDY (3 experts) and Neo-GNN (4 experts); for Pubmed, we use NCN, SEAL, NCNC (3 experts) and MLP (4 experts). From the results, we find that using only 3 or 4 experts achieves performance comparable to using all the experts in the paper. Notably, we don't use the computationally intensive SEAL for the ogbl-collab dataset. Furthermore, the inclusion of the less effective MLP expert in the Pubmed dataset still results in performance improvement, highlighting the complementary nature of the experts and the effectiveness of the proposed method.
3. With Experts as Input. We concatenate the prediction results of experts with the heuristic features as the input feature of gating model. Results show that this additional input did not lead to improvement.
**Table 2** compares the results of ensemble methods with our model. The evaluation metrics are Hits@50 and MRR for ogbl-collab and Pubmed, respectively. Ensemble [1] and Ensemble [2] refer to two ensemble methods specifically designed for link prediction tasks. As shown, our model outperforms these ensemble methods.
[1] Stacking Models for Nearly Optimal Link Prediction in Complex Networks, PNAS'20
[2] An Ensemble Model for Link Prediction based on Graph Embedding, Decision Support Systems'22
**Figure 1** is derived from results when training experts and gating in an end2end way on ogbl-collab. Figure 1(a) shows the overall and each expert’s performance. Figure 1(b) shows the gating weights of each expert.
**Figure 2** illustrates the convergence process of training for our Link-MoE model on the Pubmed dataset, demonstrating a smooth training loss curve.
**Table 3** shows the results of end2end training on the ogbl-collab and Cora datasets. The evaluation metrics are Hits@50 and MRR for ogbl-collab and Cora.
1. Cora: we employed eight models (MLP, node2vec, GCN, NeoGNN, NCN, NCNC, SEAL, and BUDDY) and used the gating model to select the top-3 experts for each node pair.
2. ogbl-collab: we used four models (MLP, GCN, NCN, and NCNC) and selected the top-2 experts.
**Figure 3** displays the weight distribution generated by our gating model for two heterophilic graphs: the Chameleon and Squirrel datasets. We can see that feature proximity-based experts, such as MLP and GCN, are rarely employed for both datasets, which is consistent with the characteristics of heterophilic graphs, where connected nodes tend to have dissimilar features.
**Table 4** presents the results of our Link-MoE model under the HeaRT setting [53] for the ogbl-collab and Pubmed datasets. It is evident that our model achieves better performance even under this challenging HeaRT setting. This highlights the effectiveness of our approach in leveraging the strengths of multiple experts to achieve superior performance.
Pdf: /pdf/fe06ceab101e8f7bb5da928298e90c8ccd223d75.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Zeroth-Order Sampling Methods for Non-Log-Concave Distributions: Alleviating Metastability by Denoising Diffusion | Accept (poster) | Summary: The authors investigate a popular reverse-diffusion sampling process, when the scores are estimated not from target samples but using the target's unnormalized density, which is the setup in energy-based modelling. Specifically, the authors consider a recent Monte-Carlo estimator of the scores which requires sampling from a possibly multimodal distribution (product of target and Gaussian densities). To do this, the authors use rejection sampling and obtain an upper bound on the convergence of their method. They also empirically validate their method in low dimensions with thorough experiments that monitor and benchmark against other methods:
- **convergence** (in MMD, $W_2$, and generic statistics) vs. **cost** (in dimension or target queries) in Figure 1
- quantifying **mode coverage** in Figure 4
as well as other visual diagnostics of convergence.
Strengths: The paper is clearly written and the results are interesting. The experiments are thorough; I appreciate the use of the $W_2$ in Fig 1.a., as it is sensitive to mode coverage, which is less the case for some other sampling metrics.
Weaknesses: The authors are clear about the limitations of their method, for example the complexity in Corollary 3.1. is exponential not only in the dimension $d$ but also in the smoothness $L$ of the target (this holds even in one dimension).
Technical Quality: 3
Clarity: 3
Questions for Authors: -
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their valuable advice and comments and greatly appreciate the positive evaluation.
>The authors are clear about the limitations of their method, for example the complexity in Corollary 3.1. is exponential not only in the dimension d but also in the smoothness L of the target (this holds even in one dimension).
That is right. Just to further remark about this fact: for general target distributions as considered in our paper, it was shown in [1] that diffusion posterior sampling is computationally intractable in the worst case, and that worst case is among the distributions considered in our paper. Therefore, we feel that if the theoretical complexity needs to be improved, one possibility is to restrict the target distributions to have more structure, while possibly adapting our algorithm to these structures to get better smoothness/dimension-dependence.
[1] Gupta, Shivam, et al. "Diffusion posterior sampling is computationally intractable." arXiv preprint arXiv:2402.12727 (2024).
---
Rebuttal Comment 1.1:
Title: Answer to authors
Comment: I thank the authors for their answer. | Summary: In this paper, the authors are interested in the problem of sampling from an arbitrary non-logconcave probability distribution (namely, multi-modal distribution) with only access to its unnormalized density. While most of popular sampling methods rely on queries of the score of the target, i.e. the gradient of the log-density (1st-order methods), the proposed approach is a 0th-order sampling method, as it relies on queries of the unnormalized target density itself. Inspired by the performance of denoising diffusion models for generative tasks, they propose a sampling algorithm, Zeroth-Order Diffusion Monte Carlo (ZOD-MC), which simulates a discretized and approximate version of the time-reversal of the standard Ornstein-Uhlenbeck diffusion process (the ideal scheme being known to converge to the target). In this algorithm, rejection-sampling is used to provide a Monte Carlo estimate of the intractable scores of the process marginals that appear in the recursion. This is made possible by the Tweedie's formula, that links the score of the marginals to an expectation over the posteriors of the model. As rejection sampling is known to suffer from the curse of dimensionality, the authors acknowledge that ZOD-MC cannot address sampling problems with relatively high dimensions ($d\geq10$). To support their algorithm, they provide a convergence analysis which requires a weak assumption on the target (bounded second order moment) and a control of the Monte Carlo estimation error for the posterior distributions at any time of the process. Under an extra-assumption on this control, they derive a readable oracle complexity of ZOD-MC that quantifies the number of queries of the target density in the whole sampling procedure. 
Finally, they conduct numerical experiments in low-dimensional settings (mostly $d=2$), considering multi-modal distributions with increasingly high energy barriers or with discontinuous potentials. They compare ZOD-MC to recent denoising-based approaches [1,2] and standard MCMC approaches such as Langevin or parallel tempering. In the considered settings, ZOD-MC exhibits the best performance at an equivalent computational budget (namely, the same oracle complexity).
[1] Reverse Diffusion Monte Carlo. Huang et al. 2023.
[2] Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo. Huang et al. 2024
Strengths: - The paper is didactic in the sense that the motivations and the bricks to build the algorithm ZOD-MC are exposed in a logical way.
- The authors pay attention to bring intuition on their theoretical results.
- The mathematical results are clear to understand, especially Theorem 1.
- Although the numerical experiments are conducted on low dimensional settings, they consider interesting cases, i.e., sampling from multi-modal distributions where the modes are increasingly further from each other and Gibbs distributions with discontinuous potential.
Weaknesses: In my opinion, this paper suffers from 3 main weaknesses, which explains my score.
1. The comparison to the related work is incomplete. The authors do not cite the work [1] (although it was published approximately at the same time as [2]), where the authors propose a 1st-order sampling method, SLIPS, based on stochastic localization, that can be linked to a denoising diffusion model. In SLIPS, samples from the target distribution are obtained by following a diffusion process with intractable drift, which is nothing less than an expectation of a posterior distribution; in practice, the authors estimate this term by MCMC method (Langevin). In contrast to RDMC or ZOD-MC, SLIPS may exhibit multiple advantages: (i) it can be applied to a flexible class of stochastic localization schemes, whereas RDMC and ZOD-MC only consider the standard Ornstein-Uhlenbeck diffusion process, (ii) it provides an exact finite-time setting for sampling (which is not the case for RDMC or ZOD-MC), (iii) it is shown to scale well in dimension in their experiments (up to dimension 100), (iv) it has a small oracle complexity per score evaluation compared to the numerics presented here (32 vs 200 at least in the numerics of the current paper). Since the current approach has the same spirit as SLIPS (diffusion-based approach with MC estimation), it should be compared with it. Moreover, no numerical comparison with AIS or SMC (which are considered gold-standard to sample from low-dimensional multi-modal distributions) is provided. For me, this justifies why this paper does not meet the standards of NeurIPS.
2. In my opinion, the proposed approach misses a fundamental point, which lies in the MC estimation of the denoiser (i.e. the conditional expectation over the posterior distribution): it is linked to the ability to provide an accurate MC estimate near $t_0=0$, equivalently $t=T$, namely when $p_t$ is fully noised. When $t\approx T$, it is clear that the corresponding posterior distribution is approximately equal to the target distribution itself (since we have full noise, the quadratic term vanishes). Therefore, applying rejection sampling on the posterior distribution at the very beginning of the sampling process in ZOD-MC is equivalent to doing rejection sampling on the target distribution itself! Hence, in practice, **the whole diffusion-based approach presented in ZOD-MC to sample from the target distribution via an annealing scheme is actually as hard as directly sampling from the target distribution**, which hurts its usefulness. This remark is intimately linked to the notion of duality of log-concavity presented in [1], which explains that it is required to start the diffusion process further such that the posterior distribution is "smoother", and easier to sample from. On the theoretical side, I have a concern that is linked to my point: it seems to me that the authors make an assumption on the MC error $\delta(t)$ to estimate the denoiser (which is not made explicit in the main text) in order to derive the total complexity cost (hidden in Section B.5 in the appendix); as said above, this control is actually crucial in practice and it is not straightforward to verify an upper bound on it.
3. The section on numerical experiments is incomplete for several reasons: no indication of tuning of the hyperparameters ($T$, $N$, step-size) either for ZOD-MC or other approaches, no ablation study on these hyperparameters to exhibit robustness, no display of the estimation error of the true score for a large $t$ (where it becomes harder) at a variety of levels of oracle complexity, the plots in Figure 1 and Figure 4.a do not display any variance, missing related work as stated in my claim 1.
[1] Stochastic Localization via Iterative Posterior sampling. Grenioux et al. 2024
[2] Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo. Huang et al. 2024
Technical Quality: 2
Clarity: 1
Questions for Authors: - I have a question about Algorithm 1: to obtain the sampling recursion with the MC estimator, you first discretize the SDE with exponential integration and then replace the score by its MC estimator. Would it bring less discretization error to first replace the score in the SDE by the term involving the conditional expectation, and then apply exponential integration upon it? It seems to me that exponential integration would still be tractable in this case.
- To support my claim 2 in the 'Weaknesses' section, could the authors provide (i) the value of the average score error between the true score and the approximated score near $t=T$, while varying the number of queries to the target distribution (ie oracle complexity per score evaluation) and (ii) results of rejection sampling directly applied on the target distribution with same oracle complexity as other methods ?
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The main (and crucial) limitation of ZOD-MC is the fact that it can only be applied to small-dimensional settings ($d\leq 5$), as rejection sampling is notoriously known to suffer from the curse of dimensionality (which is the reason why it is not a popular sampling method in practice). Although the authors acknowledge this limitation in the abstract, it is not well highlighted at all in the main part of the paper: this limitation appears explicitly in Remark 5 in Section 3.3 in the Appendix, but not in the main paper. As suggested by NeurIPS guidelines, a section/paragraph 'Limitations' should be given in the main text.
On the other hand, ZOD-MC requires to know the location of local minima of the energy landscape, i.e., the location of the modes of the target distribution. As indicated by the authors, this can be done by applying Newton's method on the target potential (see Remark 1) before or while sampling; however, this is a 2nd-order method, which violates the framework chosen by the authors (0-th order method). Although it is a common assumption to have access to the location of the modes in realistic settings (which turns out to be another challenge of sampling procedures), this requirement should be much more highlighted. Otherwise the title of the paper is misleading.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their valuable advice and comments.
> The comparison to the related work is incomplete. The authors do not cite the work [1] (although it was published approximately at the same time as [2]), where the authors propose a 1st-order sampling method, SLIPS, based on stochastic localization, that can be linked to a denoising diffusion model. In SLIPS, samples from the target distribution are obtained by following a diffusion process with intractable drift, which is nothing less than an expectation of a posterior distribution; in practice, the authors estimate this term by MCMC method (Langevin). In contrast to RDMC or ZOD-MC, SLIPS may exhibit multiple advantages: (i) it can be applied to a flexible class of stochastic localization schemes, whereas RDMC and ZOD-MC only consider the standard Ornstein-Uhlenbeck diffusion process, (ii) it provides an exact finite-time setting for sampling (which is not the case for RDMC or ZOD-MC), (iii) it is shown to scale well in dimension in their experiments (up to dimension 100), (iv) it has a small oracle complexity per score evaluation compared to the numerics presented here (32 vs 200 at least in the numerics of the current paper). Since the current approach has the same spirit as SLIPS (diffusion-based approach with MC estimation), it should be compared with it. Moreover, no numerical comparison with AIS or SMC (which are considered gold-standard to sample from low-dimensional multi-modal distributions) is provided. For me, this justifies why this paper does not meet the standards of NeurIPS.
We apologize for not having cited the very related work [1]. We were not aware of it at the time our manuscript was submitted.
Similar to ZOD-MC, SLIPS is also based on the denoising diffusion model and MCMC score estimation. We have added a comparison between ZOD-MC and SLIPS in our updated manuscript. While SLIPS is great work, we hope to summarize three **major differences** between ZOD-MC and SLIPS: (1) SLIPS relies on MALA to approximate the score. As a result, analytically establishing convergence of SLIPS requires a **log-concavity outside a ball** assumption due to the difficulty in analyzing MALA. The goal of our paper is to investigate a non-logconcave sampling algorithm **without any convexity-related assumption**. Therefore, we choose to approximate the score via a rejection sampler, which can be analyzed under mild smoothness conditions. (2) SLIPS uses MALA to generate the initial point: the denoising process is simulated from the middle of the observation process. This adds extra difficulty to initialization: **the initialization error is hard to control numerically and analytically in general**. In contrast, **the initialization error in ZOD-MC can be controlled** by starting at a large time $T$ in the forward process. (3) ZOD-MC can sample from non-differentiable and even **discontinuous** densities.
Numerically, we have updated our experiments to include comparisons with SLIPS, AIS and SMC. We started by running these methods under the same set up as in the experiment in Figure 1. For SLIPS we used the same initialization as used in their code ($\mathcal{N}(0,5)$ + LMC steps) and initialize AIS and SMC with $\mathcal{N}(0,5)$. Under this set up we find that all of the methods perform quite well, despite this ZOD-MC still has the best performance out of all methods. We further demonstrate that our method is less sensitive to the initial condition by initializing the methods with $\mathcal{N}(0,1)$. We then show that these methods still suffer from metastability and are unable to sample from all modes even at different oracle complexities.
>In my opinion, the proposed approach misses a fundamental point, which lies in the MC estimation of the denoiser (i.e. the conditional expectation over the posterior distribution): it is linked to the ability to be able to provide an accurate MC estimate near at $t_0=0$, equivalently $t=T$, namely when $p_t$ is fully noised. When $t\approx T$ , it is clear that the corresponding posterior distribution is approximately equal to the target distribution itself (since we have full noise, the quadratic term vanishes). Therefore, applying rejection sampling on the posterior distribution at the very beginning of the sampling process in ZOD-MC is equivalent to do rejection sampling on the target distribution itself! Hence, in practice, **the whole diffusion-based approach presented in ZOD-MC to sample from the target distribution via an annealing scheme is actually as hard as directly sampling from the target distribution**, which hurts its usefulness. This remark is intimately linked to the notion of duality of log-concavity presented in [1], which explains that it is required to start the diffusion process further such that the posterior distribution is "smoother", and easier to sample from. On the theoretical side, I have a concerns that is linked to my point: it seems to me that the authors make an assumption on the MC error $\delta(t)$ to estimate the denoiser (which is not explicited in the main) in order to derive the total complexity cost (hidden in Section B.5 in the appendix); as said above, this control is actually crucial in practice and it is not straightforward to verify an upper bound on it.
We apologize if our presentation led to a serious misunderstanding of our main results (Proposition 3.1, Theorem 1 and Corollary 3.1), but we are afraid the comment "the whole diffusion-based approach presented in ZOD-MC to sample from the target distribution via an annealing scheme is actually as hard as directly sampling from the target distribution" is **not** true.
---
We would like to kindly direct the reviewer to "Comment" for the response to the rest of the review. We sincerely apologize for exceeding the character limit but we eagerly hope to thoroughly address the reviewer's concerns.
---
Rebuttal 2:
Comment: The key reason is that we do not need a high-quality sample to approximate the score well for large $t$. As explained in line 173, when $t=T$ (which is chosen to be $\Theta(\log(d/\varepsilon))$ in Corollary 3.1), we only need one low-quality ($d$-accuracy in $W_2$) sample from $p_{0|T}$ to ensure the score estimation error is $O(\varepsilon)$. Therefore, even though the dual log-concavity argument in [1] says sampling from $p_{0|T}$ is comparably hard to sampling from the target, the diffusion-based sampling approach makes things easier because it only requires a low-quality sample from $p_{0|T}$. **Regarding the reviewer's question on the theoretical side**, our Corollary 3.1 actually proves a worst-case complexity: we run the rejection sampler until it generates an accurate sample ($\delta(t)=0$). **This complexity strictly upper bounds what we need in practice:** as we addressed previously, we do not need a high-quality sample from $p_{0|t}$ when $t$ is large. Therefore, in practice we can set a threshold on the number of rejections to get a relatively low-quality sample: if no proposal is accepted within this threshold, we simply approximate the score by $-\frac{x}{1-e^{-2t}}$ (inspired by Lemma 1 in the paper). In this way, we reduce the number of queries without hurting the score-estimation accuracy.
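For concreteness, the thresholded estimator can be sketched as follows. This is a hypothetical illustration: the Gaussian proposal, the function signature `score_estimate`, and the cap `max_rejections` are our own illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def score_estimate(x, t, V, V_star, rng, max_rejections=10_000):
    """Sketch of a zeroth-order Monte Carlo score estimator with a rejection cap.

    Under the OU forward process X_t = e^{-t} X_0 + sqrt(1 - e^{-2t}) Z,
    Tweedie's formula gives
        grad log p_t(x) = (e^{-t} E[Y | X_t = x] - x) / (1 - e^{-2t}),
    where Y follows the posterior p_{0|t}(y | x), proportional to
    exp(-V(y)) times a Gaussian factor. Viewed as a density in y, that
    Gaussian factor is N(e^t x, e^{2t}(1 - e^{-2t}) I), so with V >= V_star
    it serves as a rejection-sampling proposal with acceptance probability
    exp(-(V(y) - V_star)).
    """
    sig2 = 1.0 - np.exp(-2.0 * t)
    mean, std = np.exp(t) * x, np.exp(t) * np.sqrt(sig2)
    for _ in range(max_rejections):
        y = mean + std * rng.standard_normal(x.shape)
        if rng.uniform() < np.exp(-(V(y) - V_star)):
            # Accepted: y is an exact posterior sample, so this single-sample
            # plug-in of Tweedie's formula is an unbiased score estimate.
            return (np.exp(-t) * y - x) / sig2
    # Cap reached (rare for small t): fall back to the score of a standard
    # Gaussian marginal, -x / (1 - e^{-2t}), as described above.
    return -x / sig2
```

Averaging many such estimates at a fixed $(x,t)$ approximates the true score, while the fallback branch implements the $-\frac{x}{1-e^{-2t}}$ approximation discussed above.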
> The section on numerical experiments is incomplete for several reasons: no indication of tuning of the hyperparameters (N, T, step-size) either for ZOD-MC or other approaches, no ablation study on these hyperparameters to exhibit robustness, display of the estimation error of the true score for a large (where it becomes harder) and a variety of levels of oracle complexity, the plots in Figure 1 and Figure 4.a do not display any variance, missing related work as stated in my claim 1.
For ZOD-MC, since $N$, $T$ and the step-size are values that are theoretically and empirically well understood, we did not tune them. We focused instead on understanding the sample quality under different oracle complexities and different target distributions. For the other methods, we used either the official code or our best tuned versions. The reviewer is right that we should have mentioned this explicitly.
Regarding the score estimation error plots in Fig 1 & 4a, we have added the comparison to SLIPS, as displayed in Fig 1 of the rebuttal supplementary PDF. Adding the variance takes a considerable amount of time, and we will include it in the revised manuscript.
>I have a question about Algorithm 1: to obtain the sampling recursion with the MC estimator, you first discretize the SDE with exponential integration and then replace the score by its MC estimator. Would it bring less discretization error to first replace the score in the SDE by the term involving the conditional expectation and then apply exponential integration upon it ? It seems to me that exponential integration would still be tractable in this case.
Thanks for the interesting question. It is true that we can approximate the score before applying the exponential integrator scheme. **However, we do not think this would necessarily decrease the order of the discretization error.** From the analytical perspective, the discretization error factor in lines 506-507 would change from $\lVert \nabla \ln p_{T-t}( \bar{X_t} ) -\nabla \ln p_{T-t_k}( \bar{X_{t_k}} ) \rVert^2$ to $\lVert \nabla \ln p_{T-t}( \bar{X_t} ) -\nabla \ln p_{T-t}( \bar{X_{t_k}} ) \rVert^2$. Notice that in our paper, instead of splitting the first error via the triangle inequality (which would make the error bound strictly larger than the second error), we analyzed it by looking at the dynamics of $\mathrm{d} \nabla\log p_{T-t}(\bar{X_t})$. This finer analysis helps us obtain an upper bound of order $O(t-t_k)$. We believe this upper bound order also applies to the second error.
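For reference, the exponential-integrator step we have in mind for the reverse OU SDE $\mathrm{d}\bar X_t = (\bar X_t + 2\nabla\log p_{T-t}(\bar X_t))\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t$ freezes the (estimated) score at $t_k$ and integrates the linear drift and the noise exactly; the notation below is ours and may differ slightly from Algorithm 1:

```latex
\bar{X}_{t_{k+1}}
  = e^{\gamma_k}\,\bar{X}_{t_k}
  + 2\left(e^{\gamma_k}-1\right)\hat{s}_{T-t_k}\!\left(\bar{X}_{t_k}\right)
  + \sqrt{e^{2\gamma_k}-1}\;\xi_k,
  \qquad \xi_k \sim \mathcal{N}(0, I_d),\quad \gamma_k = t_{k+1}-t_k.
```

The reviewer's alternative ordering would instead substitute the conditional-expectation form of the score into the SDE first and then apply the same integration; as argued above, we expect the discretization error to remain of order $O(t-t_k)$ in both cases.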
---
We would like to kindly direct the reviewer to "Comment" for the response to the rest of the review. We sincerely apologize for exceeding the character limit but we eagerly hope to thoroughly address the reviewer's concerns.
---
Rebuttal 3:
Comment: > To support my claim 2 in the 'Weaknesses' section, could the authors provide (i) the value of the average score error between the true score and the approximated score near t=T, while varying the number of queries to the target distribution (ie oracle complexity per score evaluation) and (ii) results of rejection sampling directly applied on the target distribution with same oracle complexity as other methods ?
Re (i): we added a numerical comparison of the score estimation errors at $T$ for ZOD-MC, RDMC, RSDMC and SLIPS across different oracle complexities. We consider two target distributions (one in 2D and one in 5D). In both cases, the score estimation error of ZOD-MC is smaller than that of RDMC and comparable to those of RSDMC and SLIPS.
Re (ii): Rejection Sampling (RS) requires constructing an upper envelope to generate an accurate sample, and it is not known how to do that for general distributions. RS also suffers from metastability: for example, to sample GMMs, the complexity can depend exponentially on the distance between modes. However, when combined with a diffusion model, this issue can be alleviated because the intermediate target $p_{0|t}$ has more concentrated modes and we do not require very accurate samples when $t$ is large. To provide more evidence of this, we ran an experiment with a simple target distribution and demonstrate that rejection sampling is unable to sample from it. The example is a GMM with means $(0,0), (5,5), (-6,-8)$ and covariances $0.1 I$; we construct an envelope using knowledge of the means and stds of each mode (notice that this information is not generally available). We then generate $5 \cdot 10^7$ proposals and only $3$ get accepted. Using the same oracle complexity, our method is able to correctly sample from the target distribution.
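The failure of vanilla rejection sampling on this kind of target can be reproduced with a minimal sketch. The broad Gaussian proposal and envelope constant below are our own illustrative choices (not the envelope used in the experiment above), so the acceptance counts will not match the exact figures quoted in the rebuttal:

```python
import numpy as np

# Illustrative vanilla rejection sampler for the 3-mode GMM described above:
# means (0,0), (5,5), (-6,-8), covariances 0.1*I, equal weights.
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [5.0, 5.0], [-6.0, -8.0]])
var = 0.1  # per-coordinate variance of each mode

def target_pdf(x):
    diffs = x[:, None, :] - means[None, :, :]          # shape (n, 3, 2)
    sq = np.sum(diffs**2, axis=-1)                     # shape (n, 3)
    return np.mean(np.exp(-sq / (2 * var)), axis=-1) / (2 * np.pi * var)

# Proposal: a single broad Gaussian covering all three modes.
s2 = 81.0
def proposal_pdf(x):
    return np.exp(-np.sum(x**2, axis=-1) / (2 * s2)) / (2 * np.pi * s2)

# Envelope constant M with p <= M q: for this proposal the worst ratio
# occurs at the mode farthest from the origin, (-6, -8).
M = target_pdf(means[2:3])[0] / proposal_pdf(means[2:3])[0]

n = 200_000
x = rng.normal(0.0, np.sqrt(s2), size=(n, 2))
accept = rng.uniform(size=n) < target_pdf(x) / (M * proposal_pdf(x))
print(accept.sum(), "accepted out of", n)
```

Because the tight modes force a large envelope constant ($M \approx 1/\text{acceptance rate}$), only a tiny fraction of proposals is accepted, illustrating why plain RS wastes most of its oracle budget on this target.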
>The main (and crucial) limitation of ZOD-MC is the fact that it can only be applied to small dimensional settings ($d\le 5$), as rejection sampling is notoriously known to suffer from the curse of dimensionality (which is the reason why it is not a popular sampling method in practice). Although the authors acknowledge this limitation in the abstract, it is not well highlighted at all in the main of the paper: this limitation appears explicitly in Remark 5 in Section 3.3 in the Appendix, but not in the main paper. As suggested by NeurIPS guidelines, a section/paragraph 'Limitations' should be given in the main.
Besides mentioning the curse of dimensionality in the abstract, we also mention the exponential dimension dependency explicitly in Remark 4 in the main part of the paper. We will highlight this in the updated manuscript. However, it is not true that we can only handle $d\leq 5$: in fact, in the experiment in Figure 1b we sample a problem of dimension $7$ with high accuracy.
>On the other hand, ZOD-MC requires to know the location of local minima of the energy landscape, i.e., the location of the modes of the target distribution. As indicated by the authors, this can be done by applying Newton's method on the target potential (see Remark 1) before or while sampling; however, this is a 2nd-order method, which violates the framework chosen by the authors (0-th order method). Although it is a common assumption to have access to the location of the modes in realistic settings (which turns out to be another challenge of sampling procedures), this requirement should be much more highlighted. Otherwise the title of the paper is misleading.
We agree ZOD-MC also needs an oracle for the minimum value of the potential. This can be relaxed to a global lower bound of $V$. Nevertheless, we agree it still needs to be implemented. Our practical implementation was mentioned in Remark 1, where we obtain an approximation using Newton's method, and, as the reviewer mentioned, this is where 2nd-order information starts being used. Although other methods can also approximate the oracle, we agree with the reviewer and will clarify the writing and highlight exactly what is needed.
[1] Louis Grenioux et al. "Stochastic Localization via Iterative Posterior Sampling." ICML'24.
---
Rebuttal Comment 3.1:
Title: Answer to the rebuttal
Comment: First, I would like to thank the authors for their precise answers. I would like to comment them point by point.
### 1. About the related work [1] (SLIPS)
I completely agree with the remarks made by the authors on the differences between SLIPS and ZOD-MC. I would like to add the nuance that the present work and [1] have actually different messages: the contribution of [1] is above all methodological, designed for high dimensional target distributions, with lots of practical guidelines, and does not provide detailed convergence rates as done here (which makes the theoretical comparison quite difficult between the two works). In addition, I believe that SLIPS could handle discontinuous densities by replacing MALA steps with RWMH steps (even if it is not considered in the original paper).
Thank you for the new experiments including SLIPS, it is much appreciated. I have several questions about them:
- Which setting for SLIPS do you consider? The authors propose one setting in asymptotic time (that can be seen as a sort of OU time-reversal process) and one setting in finite time (that can be seen as a certain stochastic interpolant), which seems more practical to use.
- In your rebuttal, you explain that you start the Langevin-within-Langevin initialization of SLIPS at $N(0,5)$, however I am confused about this choice. The SLIPS methodology relies on a SDE starting time $t_0\in (0,T)$ ($T=\infty$ in the asymptotic setting, $T=1$ in the finite time setting), where both marginal and posterior distributions of the denoising process are expected to be approximately log-concave. Then, the starting marginal distribution is given by $N(0, \sigma^2 t_0)$, where $\sigma^2$ is a rough estimation of the variance of the target distribution. In practice, the authors of [1] explain that $t_0$ has to be tuned wrt the target distribution. Following this: how did you choose $t_0$ ? Is there a link between $N(0,5)$ and $N(0, \sigma^2 t_0)$ to ensure fair comparison with the setting of [1] ?
### 2. About the score estimation error for large $t$
I would like to thank the authors for their explanation. Indeed, looking at Proposition 3.1, it appears that for large $t$, a sampling error on the posterior distribution at time $t$ of order $O(\exp(t) \epsilon)$ is actually sufficient to have a score estimation error of order $O(\epsilon)$. This explains why it is "less important" to have good sample quality for large $t$ than for low $t$, I understand. This also explains Figures 2 and 3 in the additional experiments. However, I still have concerns about how this plays out in practice:
- Suppose that $T$ is of order $\log(d/\epsilon)$. How can you ensure that the samples obtained from the rejection sampling lead to a sampling error of order $d$ (for the Wasserstein-2 distance)? I still do not get it.
- As far as I understand, with the choice $T=\Theta(\log(d/\epsilon))$, only 1 sample of the posterior distribution may be sufficient to obtain the $\epsilon$ bound on the score error. You explain that you fix a maximum number of rejection steps to get this sample: what is the value of this threshold? Does it depend on the target? Is ZOD-MC sensitive to this? Is it often reached in the experiments?
- If this maximum number is reached, you explain that the score estimator is fixed to $-x/(1-\exp(-2t))$. In this case, the error at time $t$ with respect to the true score for large $t$ is given by $\exp(-t)\|E_{\pi}[X]\|$. With $T=\Theta(\log(d/\epsilon))$, this leads to a score error of order $\epsilon\|E_{\pi}[X]\|/d$. In the case where $\|E_{\pi}[X]\|\gg 0$, may this estimation induce issues?
### 3. More general remark
You explain that the hyperparameters of ZOD-MC are given by the theoretical results. Then, could you detail the analytical formulas of $T$, $N$, $\gamma$ used in the experiments depending on the target distribution, and how they are set numerically (I guess depending on a certain value of $\epsilon$)?
[1] Louis Grenioux et al. "Stochastic Localization via Iterative Posterior Sampling." ICML'24.
---
Reply to Comment 3.1.1:
Comment: >Thank you for the new experiments including SLIPS, it is much appreciated.
We thank the reviewer for recognizing our comparison to SLIPS.
>I completely agree with the remarks made by the authors on the differences between SLIPS and ZOD-MC. I would like to add the nuance that the present work and [1] have actually different messages: the contribution of [1] is above all methodological, designed for high dimensional target distributions, with lots of practical guidelines, and does not provide detailed convergence rates as done here (which makes the theoretical comparison quite difficult between the two works). In addition, I believe that SLIPS could handle discontinuous densities by replacing MALA steps with RWMH steps (even if it is not considered in the original paper).
Thanks for the additional comments about SLIPS. Since we mentioned that our approach (ZOD-MC) works for discontinuous densities, the reviewer suggested the future possibility of adapting SLIPS to discontinuous densities as well. That is an interesting idea, and we would love to see it implemented (and analyzed). Meanwhile, we hope the reviewer agrees that this does not mean ZOD-MC is not worth publishing.
We also very much appreciate the reviewer mentioning that [1] does not provide detailed convergence rates as done here (in this submission). We feel the scopes of SLIPS and ZOD-MC are complementary to each other, and we are personally excited about having multiple contributions to a vibrant new field. Of course, as we said, we will properly describe SLIPS in a future revision; although we really hope the reviewer can appreciate our work, we will do so regardless of the fate of this submission.
>>Which setting for SLIPS do you consider ? The authors propose one setting in asymptotic time (that can be seen as a sort of OU time-reversal process) and one setting in finite time (that can be seen as a certain stochastic interpolant), which seems to more practical to use in practice.
We consider the finite time setting with $\alpha_1=\alpha_2=1$ in [1].
>>In your rebuttal, you explain that you start the Langevin-within-Langevin initialization of SLIPS at $\mathcal{N}(0,5)$, however I am confused about this choice. The SLIPS methodology relies on a SDE starting time $t_0\in (0,T)$ ($T=\infty$ in the asymptotic setting, $T=1$ in the finite time setting), where both marginal and posterior distributions of the denoising process are expected to be approximately log-concave. Then, the starting marginal distribution is given by $\mathcal{N}(0,\sigma^2 t_0)$, where $\sigma^2$ is a rough estimation of the variance of the target distribution. In practice, the authors of [1] explain that $t_0$ has to be tuned wrt the target distribution. Following this: how did you choose $t_0$? Is there a link between $\mathcal{N}(0,5)$ and $\mathcal{N}(0,\sigma^2 t_0)$ to ensure fair comparison with the setting of [1] ?
We apologize for not giving enough details about the initialization used in our added experiments: we initialize at $\mathcal{N}(0,\sigma^2 t_0)$, where $t_0$ was set to $0.35$ and $\sigma$ was set to $5$. This choice of $\sigma$ is a rough order-of-magnitude estimate of the standard deviation of the target, whose $x$- and $y$-marginal stds are approximately $4.8547$ and $5.0671$. As suggested by the reviewer, to ensure an even fairer comparison, we have tuned the parameter $t_0$ further and obtained the following table:
| t_0 | 1e-7 | 1e-6 | 1e-5 | 1e-4 | 1e-3 | 1e-2 | 1e-1 | 0.2 | 0.3 | 0.35 | 0.4 | 0.5 |
|------|-----------|-----------|----------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| W2 | 17962.07031 | 5496.64355 | 653.80280 | 5.21865 | 5.10800 | 5.54249 | 6.77297 | 7.53148 | 8.11765 | 8.53969 | 8.67329 | 9.26004 |
As shown in the table, a better choice is $t_0 = 10^{-3}$. We will gladly update the presented experiment with this improved value. Despite this, ZOD-MC still has the best performance ($W_2 \approx 4$) with**out** requiring careful tuning of hyperparameters.
>I would like to thank the authors for their explanation. Indeed, looking at Proposition 3.1, it appears that for large $t$, a sampling error on the posterior distribution at time $t$ of order $O(\exp(t)\varepsilon)$ is actually sufficient to have a score estimation error of order $O(\varepsilon)$. This explains why it is "less important" to have good sample quality for large $t$ than for low $t$, I understand. This also explains Figures 2 and 3 in the additional experiments. However, I still have concerns about how this plays out in practice:
We thank the reviewer for recognizing our MC estimation idea. Next we address the reviewer's further questions on the rejection sampling step.
---
Reply to Comment 3.1.2:
Comment: >>Suppose that $T$ is of order $\log(d/\varepsilon)$. How can you ensure that the samples obtained from the rejection sampling lead to a sampling error of order $O(d)$ (for the Wasserstein-2 distance)? I still do not get it.
Thanks for a great question. We can obtain a sample within error of order $O(d)$ because, in theory, we use sufficiently many proposals in the rejection sampling step to get an accepted sample. This sample is unbiased and its $W_2$ error is $0$, thus certainly $O(d)$. More details follow below.
>>As far as I understand, with the choice $T=\Theta(\log(d/\varepsilon))$, only 1 sample of the posterior distribution may be sufficient to obtain the $\varepsilon$ bound on the score error. You explain that you fix a maximum number of rejection steps to get this sample: what is the value of this threshold? Does it depend on the target? Is ZOD-MC sensitive to this? Is it often reached in the experiments?
Thanks for more great questions. In theory, this threshold value does depend on the target, as the acceptance rate of the rejection sampling varies across targets. In practice, if the target is low-dimensional, getting an accepted sample in the rejection sampling step is not very costly and is easy to achieve even with minimal computational resources. In our 2D experiments we set the threshold between $100$ and $10\mathrm{K}$ and found that it is generally not reached when set to $10\mathrm{K}$. To provide evidence for this, we generated $1000$ trajectories using ZOD-MC and computed the average number of acceptances within the threshold $10\mathrm{K}$ at different time points. We see that even at large values of $t$ we still accept at least one sample on average.
| $t$ | 5.00000 | 4.28303 | 3.56606 | 2.84909 | 2.13212 | 1.41515 | 0.69818 | 0.30653 | 0.13458 | 0.02594 | 0.01139 | 0.00755 |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| GMM | 1.58 | 1.39 | 3.70 | 14.03 | 48.27 | 129.05 | 308.23 | 786.74 | 1299.30 | 2032.39 | 2251.65 | 2316.49 |
| Mueller | 1.01 | 1.20 | 2.62 | 9.79 | 37.25 | 145.42 | 447.96 | 945.72 | 1598.99 | 2786.56 | 3129.97 | 3257.38 |
>>If this maximum number is reached, you explain that the score estimator is fixed to $-x/(1-e^{-2t})$. In this case, the error at time $t$ with respect to the true score for large $t$ is given by $\exp(-t)\|E_{\pi}[X]\|$. With $T=\Theta(\log(d/\varepsilon))$, this leads to a score error of order $\varepsilon\|E_{\pi}[X]\|/d$. In the case where $\|E_{\pi}[X]\|\gg 0$, may this estimation induce issues?
Thanks for leading us to see where the confusion arises. To answer this question, a first reminder is that it is a **rare** event that the fallback $-x/(1-e^{-2t})$ is used, i.e., that the maximum number of rejections is reached. Therefore, the score error at time $t$ is **not** of order $\varepsilon \|E_{\pi}[X]\|/d$. If the rare event actually happens at a large time $t$, even though it may cause a large score error at time $t$, **its contribution to the overall score error along the trajectory is still small**. As shown in Theorem 1, the score estimation error (term II) is a weighted average of the score errors at each $t$, with small weights given by the step-sizes. To see this, let us consider $t=T$ and take a closer look at the step-size we impose in the proof of Corollary 3.1: the weighted score error at $T$ is upper bounded by $\min ( \varepsilon^{1/2}(d+m_2^2)^{-1/2},\varepsilon d^{-1} )\,\varepsilon d^{-1}\|E_{\pi}[X]\|$ (up to polylog factors), which is of order $O(\varepsilon^{3/2}d^{-1})$ since $\|E_{\pi}[X]\|\le E_{\pi}[\|X\|^2]^{1/2}=:m_2$.
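For completeness, the final inequality can be spelled out in one line (same symbols as above):

```latex
\min\!\left(\varepsilon^{1/2}(d+m_2^2)^{-1/2},\,\varepsilon d^{-1}\right)
\varepsilon d^{-1}\,\lVert E_{\pi}[X]\rVert
\;\le\;
\varepsilon^{1/2}(d+m_2^2)^{-1/2}\cdot \varepsilon d^{-1}\cdot m_2
\;\le\;
\varepsilon^{3/2} d^{-1},
```

since $\lVert E_{\pi}[X]\rVert \le m_2 \le (d+m_2^2)^{1/2}$.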
---
Reply to Comment 3.1.3:
Comment: >You explain that the hyperparameters of ZOD-MC are given by the theoretical results. Then, could you detail the analytical formulas of $T,N,\gamma$ used in the experiments depending on the target distribution, and how they are set numerically (I guess depending on a certain value of $\varepsilon$)?
The analytical formulas for $T,N,\gamma_k$ are provided in Corollary 3.1 as well as its proof. Explicit formulas are $T=\frac{1}{2}\log(\frac{d+\mathrm{m}_2^2}{\varepsilon})$, $N=\Theta\big(\max( \frac{(d+\mathrm{m}_2^2)^{\frac{1}{2}}(T+\log(\delta^{-1}))^{\frac{3}{2}}}{\varepsilon^{\frac{1}{2}}}, \frac{d(T+\log(\delta^{-1}))^2}{\varepsilon} )\big)$ with $\delta=\Theta\big( \min( \frac{\varepsilon^2}{d}, \frac{\varepsilon}{\mathrm{m}_2}) \big)$, and step-size $\gamma_k=\kappa\min(1,T-t_k)$ with $\kappa=\Theta\big(\frac{T+\log\delta^{-1}}{N}\big)$.
Numerically, we choose $T$ and $N$ based on two criteria: (1) $N,T$ satisfy the analytical formulas with small $\varepsilon$ (KL error); (2) $N,T$ match the EDM framework [2], which was shown to give a good tradeoff between sample quality and computation cost. Based on these criteria, we choose different values of $T$ ($T=2,5,10$) and $N$ ($N=25,50$) in different experiments. Regarding the step-size, we set the exponentially decaying $\gamma_k$ according to our analytical formula, with $\kappa = 1.6 \big( \exp\big(\frac{\log T + \log \delta^{-1}}{N} \big) - 1\big)$, which is of the same order as the $\kappa$ in our analytical formula for the range of values we consider.
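The recipe above can be sketched as follows (a sketch under the stated formulas; the function name `zodmc_schedule` and the early-exit guard are ours):

```python
import numpy as np

def zodmc_schedule(d, m2, eps, N):
    """Sketch of the hyperparameter formulas: d = dimension, m2 = second-moment
    root of the target, eps = target KL accuracy, N = number of steps."""
    # Terminal time: T = 0.5 * log((d + m2^2) / eps).
    T = 0.5 * np.log((d + m2**2) / eps)
    # Failure tolerance: delta = min(eps^2 / d, eps / m2).
    delta = min(eps**2 / d, eps / m2)
    # Step-size prefactor: kappa = 1.6 * (exp((log T + log delta^{-1}) / N) - 1).
    kappa = 1.6 * (np.exp((np.log(T) + np.log(1.0 / delta)) / N) - 1.0)
    # Exponentially decaying steps gamma_k = kappa * min(1, T - t_k); the
    # schedule approaches T geometrically without overshooting it.
    t, ts = 0.0, [0.0]
    for _ in range(N):
        t = t + kappa * min(1.0, T - t)
        ts.append(t)
        if T - t < 1e-12:
            break
    return T, kappa, np.array(ts)
```

For instance, with $d=2$, $m_2=5$, $\varepsilon=0.01$ and $N=50$, this yields $T\approx 3.95$ and constant-size steps early on that shrink geometrically as $t_k$ approaches $T$.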
[1] Louis Grenioux et al. "Stochastic Localization via Iterative Posterior Sampling." ICML'24.
[2] Tero Karras et al. "Elucidating the design space of diffusion-based generative models." NeurIPS'22.
---
Rebuttal 4:
Comment: Thank you for giving the practical guidelines to set the hyperparameters of ZOD-MC in practice. Once again, I think it is of high interest to enable the use of this algorithm by practitioners.
Overall, I really want to thank again the authors for giving precise answers to my several questions, and bringing lots of additional details on the way ZOD-MC works. I would like to sum up those points:
- I think that both the authors and I agree that SLIPS should be added to the related work and to the numerical experiments with precise tuning of its critical hyperparameter (although it does not seem obvious as explained above). I think we also agree that SLIPS and ZOD-MC have complementary strengths and weaknesses (they could be compared in the manuscript), and none of them should be said to be the best in an absolute way. I think we finally agree that AIS and SMC should be added to the competing algorithms, once again with careful tuning of their hyperparameters.
- I now understand better the low importance of the score estimation for large $t$ (which I did not find so well emphasized in the submission) and apologize to the authors if I was, at first sight, confused about this part. I really think that the authors should add remarks on this specific part to avoid confusion among the readers.
- On the methodological side: I encourage the authors to include in the revised version of the manuscript the guidelines detailed in their rebuttal and their following comments (and, for instance, also include the ablation studies presented above): namely, the threshold on the number of rejections, the values of the hyperparameters...
- Last but not least: put more emphasis on the fact that ZOD-MC requires the use of 1st/2nd order methods to find the modes of the target distribution in order to have an adapted proposal distribution in the rejection sampling phase (without this, ZOD-MC could not work; it is crucial). This setting is not problematic (it is quite a standard assumption in a variety of sampling methods), but as such, the title and the abstract of the submission suggest that the proposed methodology is purely zeroth-order...
I will wait for the reply of the authors to my last comments before deciding on a change of my score. Thank you once again for the fruitful discussion!
---
Rebuttal 5:
Comment: We sincerely thank the reviewer again, for continuously engaging with us and helpful suggestions!
>Thank you once again for the precision, it makes more sense to me ! I am still a bit confused about the gap between the initialization with variance $5$ (given in the rebuttal) and the initialization with variance $\sigma^2t_0=8.75$ (given in your last reply). Could you explain it ?
We apologize for creating a confusion fully due to us (it has been a very long 2 weeks) and thank you very much for catching it. It was a typo in our rebuttal that the variance of the initialization was stated to be $5$. What we really meant was $\sigma=5$. We did use $\sigma=5,t_0=0.35$, which led to an initialization with variance $8.75$. Results and values stated in our last reply remain correct, and we will still use $t_0=0.001$ and $\sigma=5$ in our revision, leading to $W_2=5.108$ for SLIPS.
> * I think that both the authors and I agree that SLIPS should be added to the related work and to the numerical experiments with precise tuning of its critical hyperparameter (although it does not seem obvious as explained above). I think we also agree that SLIPS and ZOD-MC have complementary strengths and weaknesses (they could be compared in the manuscript), and none of them should be said to be the best in an absolute way. I think we finally agree that AIS and SMC should be added to the competing algorithms, once again with careful tuning of their hyperparameters.
> * I now understand better the low importance of the score estimation for large $t$ (which I did not find so well emphasized in the submission) and apologize to the authors if I was, at first sight, confused about this part. I really think that the authors should add remarks on this specific part to avoid confusion among the readers.
> * On the methodological side: I encourage the authors to include in the revised version of the manuscript the guidelines detailed in their rebuttal and their following comments (and, for instance, also include the ablation studies presented above): namely, the threshold on the number of rejections, the values of the hyperparameters...
We appreciate the reviewer for summarizing the fruitful discussion, and we agree with the summary. We truly believe that adding the details discussed in the rebuttal (the comparison to SLIPS, AIS and SMC, more remarks on ZOD-MC, and the guidelines) will make the ZOD-MC work more comprehensive and improve its clarity and accessibility to a broader audience. We will incorporate these details in the updated manuscript.
---
Rebuttal 6:
Comment: Thank you for your kind response. Given all the elements discussed above, I have raised my score from 3 to 5.
Note: I have edited my previous comment to add a last remark on the use of 1st/2nd order methods, but it does not change my final decision.
---
Review:
Summary: This paper proposes a novel zero-order sampling algorithm (ZOD-MC) for target distributions beyond log-concavity and even isoperimetry. Unlike the first-order diffusion-based Monte Carlo methods proposed previously, this paper only requires zeroth-order information. The paper also shows the good performance of ZOD-MC from both theoretical and practical perspectives.
This paper's key step is to introduce rejection sampling to implement the score estimation in a reverse OU process. Specifically, the authors note that the envelope for the rejection sampling is a Gaussian-type distribution when the minimum negative log density of the target distribution is given as $V_*$.
Strengths: - This is the first diffusion-based Monte Carlo method which only requires the bounded second moment and a relaxation of the commonly used gradient-Lipschitz condition. Such assumptions are even much weaker than the first-order diffusion-based Monte Carlo, e.g., RDMC and RSDMC.
- Although zeroth-order sampling methods cannot circumvent the curse of dimensionality, this paper gets rid of the exponential dependence on error tolerance, i.e., $\epsilon$. Combined with nearly no smoothness requirements in this paper, such complexity is acceptable.
- This paper greatly simplifies the proof techniques used to upper bound score estimation errors, making them easy for readers to follow. Specifically, previous work introduces complicated concentration properties to provide the error bound, while this paper replaces them with a monotonicity property (Proposition B.2) used in stochastic localization.
Weaknesses: - Some related work is missing from the authors' survey, e.g., [1]. Theorem 7 of [1] provides a similar zeroth-order complexity ($\exp(O(d)\log \epsilon^{-1})$) for achieving a minimax optimal error. I suggest the authors compare their theoretical results with [1].
- The implementation of Algorithm 3 requires the minimum of $V$, which is denoted as $V_*$. How do we obtain this minimum? Since the function $V$ is highly irregular, calculating the minimum may also be difficult.
- Although the experiments seem to be comprehensive, some details are not reported. For example, what is the meaning of oracle complexity? Does it include only zeroth-order oracle, first-order oracle, or both? If only counting the first-order oracle, is the comparison fair to first-order methods? What results are if the x-axis is set as the wall clock time? What are the hyper-parameters of ZOD-MC and baselines, and how do we choose these hyper-parameters? I hope these details will be included in the appendix.
[1] Convergence Rates for Non-log-concave Sampling and Log-partition Estimation.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Corollary 3.1, we can find that the zeroth-order query complexity depends on $\epsilon_{\mathrm{KL}}^{-d}$. But I would be curious to know whether we can achieve a complexity whose order in $\epsilon$ is independent of $d$? What are the barriers to obtaining this complexity?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is a theoretical paper; it is hard to find potential negative societal impact in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their valuable advice and comments.
> Some related work is missing from the authors' survey, e.g., [1]. Theorem 7 of [1] provides a similar zeroth-order complexity ($\exp(O(d)\log\varepsilon^{-1})$) for achieving a minimax optimal error. I suggest the authors compare their theoretical results with [1].
Thanks for mentioning this very interesting work. It is related to the zeroth-order sampling problem that we considered, and we will add the following discussion in our updated manuscript:
[1] provides upper bounds on convergence rates for a zeroth-order sampling algorithm that combines an approximation technique and rejection sampling. We compare their results to ours from the following two perspectives:
1) target distributions: the target distributions in [1] are restricted to have compact support and to be m-differentiable ($m>0$). Our considered target distributions are more general. For example, they can have full support and they can even have discontinuities as shown in our Section 4 Figure 5.
2) complexity bounds: we can compare our derived complexity upper bound to the results in [1] for target distributions in the restricted class considered in [1]. Using the fact that KL is smaller than $R_\infty$, Theorem 12 in [1] can be written as an upper bound on the complexity to reach $\varepsilon$-accuracy in $R_\infty$: the complexity is of order $\Omega_{d}(\varepsilon^{-d/m})$. The $d$-dependency is implicit; therefore, we only compare the $\varepsilon$-dependency. The $\varepsilon$-dependency in both complexities is polynomial in $\varepsilon$, with a linear $d$ factor in the exponent. Since we consider a general class of targets, our result doesn't reflect the smoothness of the potential as their result does. Within the restricted class, if the target is differentiable up to an order greater than 2, their result is better than ours (in terms of $\varepsilon$-dependency); otherwise, our result is better.
> The implementation of Algorithm 3 requires the minimum of $V$, which is denoted as $V_*$. How do we obtain this minimum? Since the function is highly irregular, calculating the minimum may also be difficult.
This is an oracle that ZOD-MC relies on. What we need can be relaxed to a global lower bound of $V$. Nevertheless, we agree this oracle can still be challenging to implement. Our practical implementation was mentioned in Remark 1, where we obtain an approximation using Newton's method (although other methods, like gradient descent or the proximal bundle method, can also provide good approximations). As the sampling process explores the space further, the estimated minimum is updated as necessary.
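Schematically, such an approximation could look as follows (a hypothetical illustration, not the paper's implementation; `grad` and `hess` are assumed oracles for the derivatives of $V$, which a fully zeroth-order variant would replace with finite differences):

```python
import numpy as np

def newton_minimize(grad, hess, x0, n_iter=50):
    """Newton iteration toward a local minimizer of a potential V.

    grad, hess: assumed oracles returning V's gradient and Hessian;
    a fully zeroth-order variant would approximate them with finite
    differences. V evaluated at the returned point serves as the
    running estimate of V_*, refined as the sampler explores further.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        # Newton step: solve H(x) dx = grad V(x), then move against dx
        x = x - np.linalg.solve(hess(x), grad(x))
    return x
```

On a quadratic potential, a single Newton step already lands on the exact minimizer; for irregular potentials one would restart from several initial points and keep the smallest value seen.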
> Although the experiments seem to be comprehensive, some details are not reported. For example, what is the meaning of oracle complexity? Does it include only the zeroth-order oracle, the first-order oracle, or both? If only counting the first-order oracle, is the comparison fair to first-order methods? What are the results if the x-axis is set to wall-clock time? What are the hyper-parameters of ZOD-MC and the baselines, and how do we choose these hyper-parameters? I hope these details will be included in the appendix.
Thanks for a great question. Our claim is at least fair and in fact an understatement -- as mentioned in line 274, we count the total number of first- and zeroth-order queries, weighting each of them equally. This actually puts our method at a disadvantage, since evaluating first-order queries is more expensive than zeroth-order queries. Despite this, we still see improved counts. The only exception is the dimension experiment (Figure 1b in the main paper), where we matched the number of function evaluations; for instance, a first-order query requires $d$ evaluations. We include a figure displaying the clock time as a function of the oracle complexity under the same setup as that of Figure 1a in the main paper. As introduced at the beginning of Section 4, the baselines we considered include RDMC, RSDMC, the proximal sampler, parallel tempering, and unadjusted Langevin Monte Carlo. The hyperparameters $N$, $T$, and the step size are analytically explored, and we chose them according to Corollary 3.1. We will highlight this information in the updated manuscript.
>In Corollary 3.1, we can find that the zeroth-order query complexity will depend on $\varepsilon_{\text{KL}}^{-d}$. But I would be curious to know whether we can achieve a complexity whose order in $\varepsilon$ is independent of $d$? What are the barriers to obtaining this complexity?
The exponent $d$ in the $\varepsilon$-dependency is due to the curse of dimensionality in the rejection sampler: when we use the current version of the rejection sampler to sample $p_{0|t}$, we are only able to prove that the expected number of rejections is of order $O(\exp(2dt))$. Since the largest $t$ in the denoising process is of order $\log(d/\varepsilon_{\text{KL}})$, we end up with the $\varepsilon_{\text{KL}}^{-d}$ complexity. Improving the $\varepsilon$-dependency requires a finer design of the rejection sampler with a higher acceptance rate. We believe this is an interesting direction for future work, and theoretically it could be done by considering non-logconcave targets with special structures.
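To illustrate why only the *expected* number of proposals can be bounded, here is a textbook rejection-sampling loop (a generic sketch, not the specific $p_{0|t}$ sampler in the paper): the number of proposals consumed before acceptance is a geometric random variable whose mean equals the envelope constant $M$.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(log_target, log_proposal, draw_proposal, log_m):
    """Draw one sample from an (unnormalized) target by rejection.

    Accepts a proposal x with probability target(x) / (M * proposal(x)),
    where log_m = log M bounds the ratio target/proposal. Returns the
    sample and the number of proposals consumed -- a random quantity,
    which is why only its expectation (= M) can be quoted as a cost.
    """
    n = 0
    while True:
        x = draw_proposal(rng)
        n += 1
        if np.log(rng.random()) < log_target(x) - log_proposal(x) - log_m:
            return x, n
```

With target equal to the proposal and $M = 2$, each proposal is accepted with probability $1/2$, so the average number of proposals over many runs concentrates around 2.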
All these scientific discussions are helpful and greatly appreciated. At this moment our method's strength still lies in being agnostic to multimodality in low (e.g., $\lesssim$ 10) dimensions.
[1]: Holzmüller, David, and Francis Bach. "Convergence rates for non-log-concave sampling and log-partition estimation." arXiv preprint arXiv:2303.03237 (2023).
---
Rebuttal Comment 1.1:
Title: Response to authors' rebuttal
Comment: Dear authors,
Thank you so much for your detailed response, which has addressed most of my questions. I have raised the rating to 6.
Best regards,
Reviewer 8qyg | Summary: This paper considers the problem of sampling from an unnormalized density by combining techniques developed from score-based generative modeling and non-log-concave sampling. Specifically, based on the Reverse Diffusion Monte Carlo (RDMC) framework proposed in [1], which is a meta-algorithm based on an oracle that estimates score function at any time, the authors proposed one way to implement such oracle via techniques developed for sampling non-log-concave distributions (Alternating Sampling Framework, where the Restricted Gaussian Oracle is implemented via Rejection Sampling) and arrived at an implementable and derivative-free version of RDMC.
Strengths: This is a technically solid paper with rigorous proof. Also, a complete set of numerical experiments are included to justify the main claims. Furthermore, this paper is the first work turning RDMC into an implementable sampler, which might be useful for sampling real-world distributions. Moreover, for the theory part of this paper, no strong assumption like log-concavity or isoperimetric inequality are made in the proof.
Weaknesses: 1. One possible drawback of the proposed sampler, just as the authors stated in the paper, is that its iteration complexity depends linearly on the data dimension $d$. Hence, it will probably be appealing only for low-dimensional distributions. Therefore, it might be meaningful to do a numerical investigation to see how the sampler proposed in this paper behaves on high dimensional distributions.
2. For the sake of completeness, it might be necessary for the authors to include a section of related work on derivative-free methods for optimization and sampling from unnormalized densities. Some classical examples include the Ensemble Kalman Filter (EnKF) and Ensemble Kalman Inversion (EKI).
Technical Quality: 3
Clarity: 3
Questions for Authors: The reviewer's main concern is that when using rejection sampling to implement the Restricted Gaussian Oracle (RGO) in the Alternating Sampling Framework (ASF), one can only obtain the expected number of executions for rejection sampling (Proposition B.3). Therefore, it might be fair to compare the expected time complexity of this sampler with the exact time complexity of other samplers listed in Table 1?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It seems to the reviewer that the name "Diffusion Monte Carlo" and its abbreviation "DMC" are not good choices for naming the sampling methodology here. They happen to coincide with one of the quantum Monte Carlo methods developed in computational quantum physics. One possible way to resolve this issue is to use the name "Diffusion-Based Monte Carlo (DBMC)" or "Score-Based Monte Carlo (SBMC)" instead.
References:
[1] Huang, X., Dong, H., Yifan, H. A. O., Ma, Y., & Zhang, T. (2023, October). Reverse diffusion monte carlo. In The Twelfth International Conference on Learning Representations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their valuable advice and comments and greatly appreciate the positive evaluation.
> One possible drawback of the proposed sampler, just as the authors stated in the paper, is that its iteration complexity depends linearly on the data dimension. Hence, it will probably be appealing only for low-dimensional distributions. Therefore, it might be meaningful to do a numerical investigation to see how the sampler proposed in this paper behaves on high dimensional distributions.
We thank the reviewer for this suggestion. In Figure 1b of the main paper, we have shown numerically that the sample quality of ZOD-MC decreases slightly as the dimension increases from 1 to 7. But ZOD-MC still generates relatively high-quality samples and outperforms RDMC and RSDMC at a fixed number of function evaluations.
>For the sake of completeness, it might be necessary for the authors to include a section of related work on derivative-free methods for optimization and sampling from unnormalized densities. Some classical examples include the Ensemble Kalman Filter (EnKF) and Ensemble Kalman Inversion (EKI).
Thanks for the suggestion. The Ensemble Kalman Filter (EnKF), Ensemble Kalman Inversion (EKI), and the Ensemble Kalman Sampler (EKS) are all derivative-free sampling algorithms based on moving a set of easy-to-sample particles according to certain dynamics. However, these methods seem to be mainly for data assimilation / Bayesian posterior sampling, which requires (noisy) observations from the target distributions. That is different from our setting: sampling using queries on the target potential function only. That is why we did not compare to them.
However, we will add a discussion on these important related algorithms.
>The reviewer's main concern is that when using rejection sampling to implement the Restricted Gaussian Oracle (RGO) in the Alternating Sampling Framework (ASF), one can only obtain the expected number of executions for rejection sampling (Proposition B.3). Therefore, it might be fair to compare the expected time complexity of this sampler with the exact time complexity of other samplers listed in Table 1?
We appreciate the question on the comparison in Table 1. We used zeroth-order oracle complexities in expectation for ZOD-MC in Table 1. We believe the comparison is fair due to the following two reasons:
(1) Many tasks, such as volume computation and Bayesian inference, require a large number of samples from the target distribution. Hence, we need to run the sampling algorithm many times. As the number of runs gets large, the total oracle complexity will be approximately the total expected oracle complexity. From this perspective, the comparison in Table 1 is fair. A similar comparison has been made in [1], comparing the complexity of the proximal sampler to the complexities of LMC and MALA.
(2) It is worth mentioning that the complexities of RDMC and RSDMC are not exact either. The complexities in RDMC (Theorem 1) and RSDMC (Theorem 4.1) are proved only to ensure an $\varepsilon$-accurate sample with high probability, which means the complexities to obtain an $\varepsilon$-accurate sample almost surely for RDMC and RSDMC should be larger than the corresponding complexities presented in Table 1.
[1] Chen, Yongxin, et al. "Improved analysis for a proximal algorithm for sampling." Conference on Learning Theory. PMLR, 2022.
---
Rebuttal Comment 1.1:
Title: Response to authors' rebuttal
Comment: Dear authors,
Thank you so much for your detailed response, which has addressed most of my questions. Would it be possible for you to comment on the naming of the methods (question raised in the Limitations section of my review) here? Thanks in advance!
Best regards,
Reviewer 9Wcn
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 9Wcn's Response to authors' rebuttal
Comment: Dear Reviewer,
Thank you very much for reminding us of a very good point. Yes, you are right, DMC is not a good abbreviation. We will revise and no longer call the general framework DMC.
Best wishes,
Authors | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for the helpful comments. Results of newly added experiments are included in the pdf file.
Pdf: /pdf/6f7a8c16bb61fc0e4cfb43d74c7fbdbda4bbe6f8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics | Accept (poster) | Summary: This paper implements a Lorentz-equivariant transformer and applies it to several problems in particle physics. The main contribution of the paper is not the definition of a conceptually different model. At least it doesn't claim to define a model significantly different from previously proposed models. But it provides a software library (under BSD-3-Clause-Clear license) and a nice set of numerical experiments inspired by physics problems. This is a nice contribution to the community.
Strengths: The paper combines interesting ideas (imposing symmetries arising from physics using invariant theory and clifford algebras) with state-of-the-art machine learning models (transformers, Riemannian flow maching generative modeling). It comes with a software package and multiple experiments where several methods are compared. I see this as a tool that may be used by several members of the AI4science and AI4physics communities.
Weaknesses: The paper is not self-contained. In order to be a valuable tool for our community it would benefit from a more comprehensive explanation of several of its technical aspects.
- The implemented Lorentz equivariant model (what are the inputs?, are they a set of particles? how are they matched to queries, values and keys?)
- The flow-matching approach. Again, what are the inputs, outputs? How's the model defined? What does it produce? The closest thing to an explanation is in page 24, in the context of describing the experiment, but still I don't find this explanation sufficient to understand the model conceptually.
- What are the scientific goals of the experiments defined in the paper? (In the context of fundamental research in high energy physics).
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the questions in the weaknesses section.
Having these clearly answered would improve the paper's clarity, reproducibility, and usability.
I'd be willing to raise my score to 7-8 if all these aspects were clearly explained with concrete explanations and equations, since I believe this can turn into a great tool to the AI 4 physics community.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough and constructive review. We are particularly happy that you appreciate the open-source release of our library. Thanks as well for the questions and criticisms, which we address in the following.
> The paper is not self-contained.
Thank you for this feedback. Due to the fact that we had three diverse experiments, we had to put many experimental details in the appendix. We will use the extra page in the final version to expand in the main paper our description of architecture, flow-matching approach, and the experiments.
> The implemented Lorentz equivariant model (what are the inputs?, are they a set of particles? how are they matched to queries, values and keys?)
The inputs to our architecture are indeed a set of particles. Each particle is characterized by a) an energy and a momentum, which are concatenated into a "four-momentum" vector, and b) a discrete type; see the beginning of Sec. 2. As we describe in the beginning of Sec. 3.1, the four-vector is embedded into the vector grade of the multivector, and the other grades are zero-padded. The type is represented with a standard one-hot encoding.
This data is then processed with the L-GATr architecture defined in line 177. It consists of a sequence of layers of different types: layer normalization, the construction of keys, queries, and values with linear layers, an attention mechanism, geometric products, gated nonlinearities, and skip connections. We define all these maps on pages 4 and 5.
> The flow-matching approach. Again, what are the inputs, outputs? How's the model defined? What does it produce?
The key component here is a network $v_t(x)$ that takes as inputs a set of particles $x$ as well as a time variable $t$ and outputs one number for each element of the input particle properties. As an example, consider our $t\bar{t}+0j$ experiment, where we generate 6 particles, each characterized by the 4 properties $y_p, y_m, \phi, \eta$ defined in Eq. (4). The L-GATr vector field model embeds them into 6 tokens, each containing a multivector with the 4 properties as well as particle type and symmetry breaking information. The architecture processes this data and outputs a single multivector for each item, and we extract the vector field required for flow matching from the vector component of this multivector. For each of the 6 particles, this vector field contains 4 values corresponding to the 4 properties. For details, see the Models section in Appendix C.3.
To sample with such a model, one first samples initial particle properties from a base distribution (like a multivariate Gaussian in a suitable basis) and calls this $x(1)$. Then one updates the particle properties $x(t)$ by integrating the differential equation $d/dt x(t) = v_t(x)$ from $t = 1$ to $t = 0$ using an off-the-shelf ODE solver. The final values $x(0)$ are the outputs: a set of particle properties sampled from the model. By repeating this process, we can generate different samples of sets of particles.
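Concretely, the sampling loop just described could be implemented with a simple explicit-Euler integrator (a hypothetical sketch; `v` stands in for the trained vector-field network, and in practice an off-the-shelf adaptive ODE solver would be used):

```python
import numpy as np

def sample_flow(v, x1, n_steps=1000):
    """Integrate dx/dt = v(t, x) from t = 1 down to t = 0 with explicit Euler.

    v  : callable (t, x) -> array shaped like x; stands in for the network.
    x1 : particle properties drawn from the base distribution at t = 1.
    Returns the approximate sample x(0) from the data distribution.
    """
    x = np.array(x1, dtype=float)
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = 1.0 - k * dt
        x = x - dt * v(t, x)  # stepping backward in t flips the sign
    return x
```

As a sanity check, for the linear field $v(t, x) = x$ the exact solution is $x(0) = x(1)\,e^{-1}$, which the Euler loop reproduces up to discretization error.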
To train the model, we use the conditional flow matching loss defined in line 122. Intuitively, it trains the network $v_t(x)$ such that the vector field it defines moves particles on shortest paths from the base density to the data density. This is explained in much more detail in the flow-matching paper [1].
> What are the scientific goals of the experiments defined in the paper?
On a high level, all three experiments are part of the data-analysis pipeline at the LHC. We roughly sketch how they fit into this bigger picture in Fig. 1. The goal of any such analysis is either the measurement of parameters of fundamental theories of nature, such as the mass of a particle or the strength of an interaction between particles, or to discover or exclude hypotheses altogether, like in the discovery of the Higgs boson in 2012.
As a concrete example, consider the top-tagging problem of Sec. 4.2. This classification task is used to filter data collected at the Large Hadron Collider for those collisions that are likely to be the result of a top quark decay. Top quarks are the heaviest fundamental particle currently known. We are interested in selecting data that involves their decays because the production and decay probabilities of top quarks depend on whether certain "supersymmetric" particles exist. Analyzing top-decay events will therefore allow us to discover or exclude the existence of supersymmetric particles, but to do that well, we need to be able to filter them precisely, and here we show that L-GATr is a powerful tool for that task.
Now, why do we care about the existence of supersymmetric particles? If they exist, they would help us answer the "hierarchy problem", one of the biggest open questions in fundamental physics. It can be phrased as "Why is the gravitational force so much weaker than the electromagnetic, weak, and strong forces?". The existence of supersymmetric particles would also be relevant for a number of other scientific questions, including the nature of Dark Matter. Apart from supersymmetry, there are more hypotheses for new physics that benefit from confident top-taggers. All in all, if L-GATr can improve the performance on the top tagging task, it may contribute to a better, clearer, or sooner answer to such fundamental physics questions.
We hope that we were able to answer your questions and look forward to discussing further.
**References:**
- [1] Y. Lipman et al, "Flow matching for generative modelling", ICLR 2023
---
Rebuttal 2:
Title: Thank you for the answers
Comment: I appreciate the explanations. I suggest adding them to the paper or appendix. I increased the score.
---
Rebuttal Comment 2.1:
Comment: Thank you for the fast response. We are happy to hear you appreciated these explanations, and we'll make sure to add them to the paper. | Summary: The paper proposes a Lorentz equivariant transformer (L-GATr) based on geometric algebra for high energy physics. It generalizes the Geometric Algebra Transformer (GATr) from $E(3)$ equivariance to the Lorentz group. The proposed transformer is then developed into a generative model based on Riemannian flow matching for particle data. L-GATr is evaluated on several high energy physics tasks, including quantum field theory amplitude surrogates, top tagging, and generative modeling for event reconstruction.
Strengths: 1. The paper is well-written and easy to follow. In addition, the problem is well-motivated.
2. The proposed L-GATr generalizes GATr from $E(3)$ to the Lorentz group.
3. As stated by the authors, the Lorentz-equivariant flow matching proposed in section 3.2 is the first generative model proposed for particle physics.
4. Compared to graph-based Lorentz equivariant networks, the transformer architecture is more efficient and scalable.
5. The proposed method is shown to be more data efficient than the baselines in both amplitude surrogates and generative modeling experiments.
6. The ability to scale with the data is also verified in several experiments.
7. The benefits of Riemannian flow matching compared to the Euclidean version are demonstrated in the experiments.
Weaknesses: 1. It is a bit unclear to me what has been modified for Lorentz equivariance in the transformer framework. Specifically, (1) and (2) look the same as (4) and (5) in [13]. It’ll be great if the authors can state the changes explicitly.
2. The current presentation of the experiments can be a bit hard to understand for people without a physics background. Adding some basic introduction to the problems can strengthen the paper.
3. In the top tagging experiment, the performance of the proposed method is marginally worse than the baseline method.
4. Although the proposed method is claimed to support symmetry-breaking data, its effect is not well studied in the experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are $y_m, y_p, \eta$, and $\phi$ in (4)?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As mentioned in the paper, the proposed L-GATr has additional computational overhead compared to traditional transformers. Secondly, even though the framework allows additional inputs to address symmetry breaking issues, the effect of such an approach is not well studied. It is unclear how well the proposed method can handle symmetry breaking inputs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough and constructive review. We are glad to hear that you appreciated how we generalized the GATr architecture from E(3) to the Lorentz symmetry and the development of the first Lorentz-equivariant architecture. We were particularly happy that you found the paper easy to follow. Thanks as well for the questions and criticisms, which we address one by one.
> It is a bit unclear to me what has been modified for Lorentz equivariance in the transformer framework. Specifically, (1) and (2) look the same as (4) and (5) in [1].
Indeed, the form of these equations and generally the overall architecture design are similar. There are several differences though:
- The multivectors $x$ in our Eq. (1) and Ref. [1]'s Eq. (4) are elements of different geometric algebras with different metrics.
- The inner product in our Eq. (2) is therefore also different from the one in Ref. [1].
- The second term in our Eq. (1) and Ref. [1]'s Eq. (4) is different. In both cases, we have a term for the five grade projections, and we can multiply with a multivector that is invariant to the group. For Lorentz equivariance, this invariant multivector is the pseudoscalar $e_{0123}$. In the $E(3)$ case, this invariant multivector is the vector $e_0$.
- The geometric product used in the MLP is different between both architectures. This is another consequence of using a different metric.
- L-GATr uses only the geometric product as a bilinear operation in the MLP, while the original GATr had to concatenate that with an equivariant join operation to ensure expressivity. The technical reasons for this are discussed in the appendices of Ref. [1].
- The layer normalization in Eq. (3) needs to be different from Ref. [1]'s layer normalization. The reason is that the Minkowski norm of special relativity can be negative and the normalization operation has to be robust against that.
> The current presentation of the experiments can be a bit hard to understand for people without a physics background.
Thank you for this feedback. It is important to us that both machine learners and particle physicists can understand the paper. We will use the extra page in the final version to substantially expand the introduction to particle physics and to the concrete experiments.
> In the top tagging experiment, the performance of the proposed method is marginally worse than the baseline method.
It's true we do not outperform the strongest baselines on this task (unlike on the other two tasks). We consider our performance and that of the strongest models in the literature to be on par for all practical purposes. In terms of accuracy, the strongest model and L-GATr differ only by 0.09 percentage points, and on one of the four established metrics L-GATr achieves the best score to date.
> Although the proposed method is claimed to support symmetry-breaking data, its effect is not well studied in the experiments.
Actually, in two of our three experiments we use symmetry-breaking data: the direction of the particle beam is provided as an input in the top-tagging and generative modelling problems. This is important, because the detector measurements depend on this direction. (In contrast, the amplitude regression task does not include detector effects, so additional symmetry-breaking inputs are neither needed nor provided there.)
To test the relevance of these symmetry-breaking inputs, we re-ran the top-tagging experiment for a version of L-GATr *without* the beam direction as an input. We found that this decreased the model's performance as follows:
Model | Accuracy | AUC | $1/\epsilon_B$ ($\epsilon_S = 0.5$) | $1/\epsilon_B$ ($\epsilon_S = 0.3$)
--- | --- | --- | --- | ---
L-GATr (original) | 0.9417 | 0.9868 | 548 | 2148
L-GATr (no beam direction) | 0.9403 | 0.9860 | 440 | 1840
The beam direction is thus important to analyze data that involve detector effects.
> What are $y_m$, $y_p$, $\eta$, and $\phi$ in (4)?
These four variables form an alternative basis for the particle four-momenta that is better aligned with the physically relevant properties of a particle in the context of a collider experiment: $\eta$ and $\phi$ represent the angle in which a particle is moving, $y_p$ is a measure of the speed with which it moves away from the collision, and $y_m$ is related to its mass. A convenient property for flow matching is that the support of the distribution is convex in this space. The dictionary that maps $(y_m, y_p, \eta, \phi)$ to four-momentum $(E, p_x, p_y, p_z)$ is given in Eq. (4) and just after it. We will add the inverse map from four-momenta to this alternative basis and improve the discussion.
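For reference, the standard collider map from $(p_T, \eta, \phi, m)$ to a four-momentum is sketched below (a generic illustration: the paper's $y_p$ and $y_m$ are transformed versions of the transverse momentum and mass, and their exact transforms in Eq. (4) are not reproduced here):

```python
import math

def to_four_momentum(pt, eta, phi, m):
    """Standard collider coordinates (p_T, eta, phi, m) -> (E, px, py, pz).

    phi is the azimuthal angle and eta the pseudorapidity; the energy
    follows from the on-shell condition E^2 = m^2 + |p|^2, with the
    momentum magnitude |p| = p_T * cosh(eta).
    """
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(m * m + (pt * math.cosh(eta)) ** 2)
    return e, px, py, pz
```

By construction, the invariant mass is recovered from the output: $E^2 - |p|^2 = m^2$.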
> the proposed L-GATr has additional computational overhead compared to traditional transformers.
It's true that L-GATr has some computational overhead over a standard transformer architecture, but we would like to stress that it is manageable for almost all particle-physics applications.
In Appendix C.4 of our paper we show timing measurements. A forward pass through the tested model takes between 2ms and 100ms when up to 40k particles are processed. This is fast enough for almost all steps of the data-analysis pipeline and in fact negligible compared to the simulators. (There are a few exceptions that require faster inference, most notably the trigger, but these specialized cases go beyond the scope of the paper.)
Compared to a standard transformer, L-GATr is slower for small numbers of particles. However, it is comparably fast when the number of particles is large, since then the attention operation is the bottleneck for both. Compared to a message-passing network, a popular type of equivariant architecture, L-GATr is consistently faster and scales much better to large numbers of particles.
We hope that we were able to answer your questions and look forward to discussing further.
**References**:
- [1] J. Brehmer et al, "Geometric Algebra Transformer", NeurIPS 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for explaining the differences between the proposed work and [1]. It's clear to me now the difference between the two. It is also great to hear the authors would provide some background materials for people without physics backgrounds.
However, some of the weaknesses remain. The beam direction doesn't seem to significantly impact the network performance in the top-tagging experiment (0.9403 vs. 0.9417). As a result, I'm still not convinced how well the network can deal with actual symmetry-breaking scenarios and to what extent the network can handle symmetry breaking. Therefore, I keep my rating unchanged.
---
Rebuttal 2:
Comment: Thank you for the response. We are glad we were able to explain the novelties in our architecture better.
As for the effect of symmetry breaking, we tend to disagree that the difference between an accuracy of 0.9403 and 0.9417 is insignificant on this benchmark. This difference is comparable in size to the gap between equivariant networks and the top-performing non-equivariant networks (ParticleNet, ParT) outlined in Table 1.
To make the relevance of symmetry-breaking inputs clearer, we also re-ran the generative modelling experiment without them. Because of limited time, we ran shorter experiments (50% of the training time quoted in the paper). This is what we find:
Model | NLL (↓) | AUC (↓)
--- | --- | ---
L-GATr (original) | **-32.5** | **0.59**
L-GATr (no beam direction) | -18.4 | 0.99
So symmetry-breaking inputs are crucial for this experiment. This stark difference is not surprising. Consider the AUC metric, in which we train a classifier to distinguish between samples generated by the model and test data. For a generative model with unbroken symmetry, the distribution is invariant under rotations: it generates samples with particles moving in any direction with equal likelihood. This is very different from the data distribution, which is not invariant under rotations: the beam direction leads to a strongly preferred direction for the produced particles. It is easy for a classifier to spot this difference, leading to the high AUC. A similar argument can be made for the log likelihood. | Summary: The paper proposes the Lorentz Geometric Algebra Transformer (L-GATr) for high-energy physics tasks. This model extends the Geometric Algebra Transformer by incorporating relativistic considerations. Specifically, L-GATr supports partial and approximate symmetry for symmetry-breaking inputs and is applied to generative models.
Strengths: - The motivation of the paper is strong, addressing an important application of equivariance models. While symmetries are prevalent in high-energy physics, there are few applications of equivariance models in this field.
- The paper clearly distinguishes their work from existing research, emphasizing the significance of their contributions.
- They connect the proposed model to the generative framework, making it potentially applicable to a broader range of tasks.
Weaknesses: - The model's performance in Table 1 is not optimal.
- The experiments on generative modeling lack comparisons with other equivariant models or SOTA models.
- The scalability of the model remains limited, which reduces its suitability for high-energy physics applications.
Technical Quality: 3
Clarity: 4
Questions for Authors: I think another interesting comparison would be to evaluate the proposed model against traditional sampling methods, especially if the goal is to design machine learning models that are more efficient than traditional methods (within an acceptable error tolerance). Could the paper include this comparison? I believe a positive result could attract attention from researchers in the high-energy physics community.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The efficacy and scalability of the proposed framework may not yet suffice to replace traditional methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough and constructive review. We are glad that you liked the motivation of our work, found our contributions significant, and appreciated the generative modelling part. Thanks as well for the questions and criticisms, which we address in the following.
> The model's performance in Table 1 is not optimal
It's true we do not outperform the strongest baselines on this task (unlike on the other two tasks). We consider our performance and that of the strongest models in the literature to be on par for all practical purposes. In terms of accuracy, the strongest model and L-GATr differ only by 0.09 percentage points, and on one of the four established metrics L-GATr achieves the best score to date.
> The experiments on generative modeling lack comparisons with other equivariant models or SOTA models.
We are not aware of any other Lorentz-equivariant generative models. If you know of any, we would appreciate if you could point us to them; we would be eager to add them to the comparison.
The transformer-based flow-matching model that we compare to is the strongest baseline we could come up with. Flow matching is establishing itself as a standard in high-energy physics. For instance, Buhmann et al. [1] compare the performance of GAN, diffusion, and flow-matching models based on the same architectural elements and find flow matching to work best.
That being said, we agree that we should compare to more diverse baselines, especially those with open-source implementations. We started experiments with SO(3)-equivariant architectures and non-equivariant autoregressive transformers [2] and will include them in the final version. Thank you for the suggestion!
> The scalability of the model remains limited. [...] The efficacy and scalability of the proposed framework may not yet suffice to replace traditional methods.
Could you elaborate what you mean here? In terms of scaling the model size, we used L-GATr models with millions of parameters without issues.
In terms of scaling to large training data sets, we see in Fig. 3 that L-GATr benefits from that more than all baselines.
The scaling to large number of particles (tokens) is studied in Appendix C.4. A forward pass through the tested L-GATr model takes between 2ms and 100ms when up to 40k particles are processed. That is fast enough for almost all steps of the particle-physics data-analysis pipeline and in fact negligible compared to the simulators. (There are a few exceptions that require faster inference, most notably the trigger, but these specialized cases go beyond the scope of the paper.)
> evaluate the proposed model against traditional sampling methods
If we understand this suggestion correctly, we are already doing this: we evaluate our generative model by comparing samples from it to samples from the simulator, which are based on traditional Monte-Carlo sampling. We quantify the difference between these two distributions by comparing marginal distributions, evaluating the log likelihood, and through a classifier two-sample test. (Unfortunately, the density of the traditional samples is not tractable, which makes it difficult to define more objective metrics for the high-dimensional samples.)
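For concreteness, the classifier two-sample test mentioned above reduces to an AUC computed over classifier scores. A minimal rank-based sketch (the score arrays below are toy placeholders, not our actual classifier outputs):

```python
import numpy as np

def c2st_auc(scores_real: np.ndarray, scores_gen: np.ndarray) -> float:
    """AUC of a classifier two-sample test: the probability that a generated
    sample receives a higher 'generated' score than a real one.
    0.5 means the classifier cannot tell the distributions apart (ideal);
    values near 1.0 mean the generated samples are easy to spot."""
    diff = scores_gen[:, None] - scores_real[None, :]
    return float((np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size)

# perfectly separable scores -> AUC = 1.0
assert c2st_auc(np.array([0.1, 0.2]), np.array([0.8, 0.9])) == 1.0
# identical score distributions -> AUC = 0.5
assert c2st_auc(np.array([0.5, 0.5]), np.array([0.5, 0.5])) == 0.5
```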
To the extent that these metrics can measure it, we find that the L-GATr model gives us samples that are close to the traditional sampling method, while being orders of magnitude faster to generate: sampling from an L-GATr flow-matching model takes milliseconds per sample, from a fast classical sampler seconds per sample, and from the most accurate state-of-the-art samplers minutes per sample.
We hope that we were able to answer your questions and look forward to discussing further.
**References**
- [1] E. Buhmann et al., "EPiC-ly Fast Particle Cloud Generation with Flow-Matching and Diffusion", arXiv:2310.00049
- [2] A. Butter et al., "Jet Diffusion versus JetGPT - Modern Networks for the LHC", arXiv:2305.10475 | Summary: The authors propose an architecture for high-energy physics events – the Lorentz Geometric Algebra Transformer, which is equivariant under Lorentz transformations. The architecture is based on the Geometric Algebra Transformer architecture, and generalizes to relativistic scenarios and the Lorentz symmetry. The architecture is demonstrated on regression, classification and generation tasks in particle physics.
Strengths: This article is well-written and effectively communicates its ideas. While the novelty of the research may not be groundbreaking, it offers valuable contributions to the field. The authors conduct a sufficient number of experiments to test their proposed architecture.
Weaknesses: * The proposed architecture, while theoretically appealing, suffers from significant computational overhead due to the addition of Lorentz layers to the already resource-intensive Transformer model.
* The comparison of generative modeling capabilities is limited to flow models, which may not represent the full spectrum of possible approaches.
* The model's performance, while adequate, does not demonstrate a significant improvement over baseline models, which may be a concern given the added computational cost.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The experiments are conducted on four vectors of the hard particles. In this case, is it really necessary to employ transformers since the input dimension is not as high? Especially for the regression task, the training set is a relatively small one, yet the model has around 2 million parameters.
* Why is a smaller model used for the top tagging task? Obviously, this task has higher dimensionality and larger datasets compared to the regression task.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors briefly discussed the limitations in the Discussion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough and constructive review. We are happy to hear that you found our architecture a valuable contribution and the paper well-written. Thanks as well for the questions and criticisms, which we address one by one.
> significant computational overhead
It's true that L-GATr has some computational overhead over a standard transformer architecture, but we find it sufficiently fast for all particle-physics applications we encountered.
In Appendix C.4 of our paper we show timing measurements. A forward pass through the tested model takes between 2ms and 100ms when up to 40k particles are processed. This is fast enough for almost all steps of the data-analysis pipeline and in fact negligible compared to the simulators. (There are a few exceptions that require faster inference, most notably the trigger, but these specialized cases go beyond the scope of the paper.)
Compared to a standard transformer, L-GATr is slower for small numbers of particles. However, it is comparably fast when the number of particles is large, since then the attention operation is the bottleneck for both. Compared to a message-passing network, a popular type of equivariant architecture, L-GATr is consistently faster and scales much better to large numbers of particles. This is consistent with the prior work GATr [1, see Fig. 5 there] from which L-GATr is derived, which shares performance characteristics with L-GATr.
> The comparison of generative modeling capabilities is limited to flow models
True. We initially focused on flow matching baselines because of the stable training, high-quality samples, and the (approximately) tractable likelihood function. Because of these advantages, flow matching is establishing itself as a standard in high-energy physics. For instance, Buhmann et al. [2] compare the performance of GAN, diffusion, and flow-matching models based on the same architectural elements and find flow matching to work best.
Thank you for the suggestion to expand this. We are working on generating additional baseline results for the generative modelling task, including an SO(3)-equivariant architecture and a non-equivariant autoregressive transformer model [3]. We will include the results and discuss them in the final version of the paper.
> The model's performance, while adequate, does not demonstrate a significant improvement over baseline models, which may be a concern given the added computational cost.
In particular in the amplitude regression task, we find L-GATr to consistently outperform the baselines. The advantage over the baselines gets larger for higher-multiplicity final states and more complex interactions. This regime of amplitude modelling is known to be the most challenging for neural surrogates, but is at the same time the most practically important, as theory computations can become prohibitively expensive there [4, 5]. We believe that L-GATr's performance improvements here can have substantial real-world impact.
> Is it really necessary to employ transformers since the input dimension is not as high?
It is not strictly necessary, but transformers lead to the best performance in our experiments. In Fig. 3, for instance, we show that a transformer outperforms a simple MLP on the amplitude regression task. Similarly, our L-GATr outperforms the GAP model, a Lorentz-equivariant MLP we constructed. This is in line with the larger trend in machine learning, which finds transformers to often outperform other models on a large variety of tasks.
> Why is a smaller model used for the top tagging task?
We tuned hyperparameters independently for each of the three problems. We interpret the differences in optimal model size as follows: the top-tagging problem takes many particles as inputs, but the task itself is a comparably simple binary classification problem. In contrast, amplitude regression requires learning complex functions of the particle momenta with high precision, especially for the higher multiplicities; these can be more accurately expressed with a bigger network.
To quantify how important the network capacity is for the amplitude regression problem, we compared our standard L-GATr models with 2 million parameters to a smaller version with 0.7 million parameters:
Model size | MSE ($Z + 1g$) [$10^{-7}$] | MSE ($Z + 4g$) [$10^{-5}$]
--- | --- | ---
2M | 1.91 | 1.48
0.7M | 1.62 | 2.71
This confirms that the capacity is more important for the more complex $Z + 4g$ process (though the smaller L-GATr model still outperforms all baselines there).
We hope we were able to address your questions and look forward to discussing further.
**References**
- [1] J. Brehmer et al., "Geometric Algebra Transformer", NeurIPS 2023
- [2] E. Buhmann et al., "EPiC-ly Fast Particle Cloud Generation with Flow-Matching and Diffusion", arXiv:2310.00049
- [3] A. Butter et al., "Jet Diffusion versus JetGPT - Modern Networks for the LHC", arXiv:2305.10475
- [4] S. Badger and J. Bullock, "Using neural networks for efficient evaluation of high multiplicity scattering amplitudes", Journal of High Energy Physics 2020
- [5] S. Badger et al., "Loop amplitudes from precision networks", SciPost Physics 2023 | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their detailed feedback and questions.
We are excited to read that the reviewers found the work a "valuable contribution to the field" (reviewer **Pf88**), that they appreciated the "significance of [the] contributions" (reviewer **srsp**), and that they found our L-GATr "a tool that may be used by several members of the AI4science and AI4physics communities" (reviewer **PeE1**).
On a technical level, they appreciated that our transformer "generalizes GATr from E(3) to the Lorentz group" (reviewer **w7CH**), that "the Lorentz-equivariant flow matching is the first [such model]" (reviewer **w7CH**), and that we "conduct a sufficient number of experiments to test" it (reviewer **Pf88**).
We are particularly encouraged by the reviewers finding that the paper is "well-written and effectively communicates its ideas" (reviewer **srsp**).
Their critique and questions are helping us improve the paper further. Here we want to highlight three points.
First, reviewers asked about **additional baselines** for the generative modelling experiment.
This is a great suggestion. We started experiments with SO(3)-equivariant architectures and non-equivariant autoregressive transformers [1]. We were not able to finish these in time for this rebuttal, but we will add the results and discuss them for the final version of this paper.
Second, reviewers asked about the relevance of **symmetry-breaking inputs**.
In two of our three experiments, we break the full Lorentz symmetry by providing the direction of the collider beam pipe as an input to L-GATr. This information is important, both to capture explicit symmetry breaking from the initial beam directions and to model soft breaking of the symmetry from detector effects. In the original paper, we did however not demonstrate this relevance empirically.
We performed a new experiment to measure the importance of symmetry-breaking inputs. We re-ran the top-tagging experiment for a version of L-GATr *without* the beam direction as an input. We found that this decreased the model's performance as follows:
Model | Accuracy | AUC | $1/\epsilon_B$ ($\epsilon_S = 0.5$) | $1/\epsilon_B$ ($\epsilon_S = 0.3$)
--- | --- | --- | --- | ---
L-GATr (original) | 0.9417 | 0.9868 | 548 | 2148
L-GATr (no beam direction) | 0.9403 | 0.9860 | 440 | 1840
L-GATr's ability to break Lorentz symmetry through the inputs is thus relevant in practice.
Finally, the reviewers were concerned about the **computational overhead and scalability** of L-GATr.
It's true that L-GATr has some computational overhead over a standard transformer architecture, but we find it sufficiently fast for all particle-physics applications we encountered.
In Appendix C.4 of our paper we show timing measurements. A forward pass through the tested model takes between 2ms and 100ms when up to 40k particles are processed. This is fast enough for almost all steps of the data-analysis pipeline of the Large Hadron Collider and in fact negligible compared to the simulators. (There are a few exceptions that require faster inference, most notably the trigger, but these specialized cases go beyond the scope of the paper.)
Compared to a standard transformer, L-GATr is slower for small numbers of particles. However, it is comparably fast when the number of particles is large, since then the attention operation is the bottleneck for both. Compared to a message-passing network, a popular type of equivariant architecture, L-GATr is consistently faster and scales much better to large numbers of particles. This is consistent with the prior work GATr [2, see Fig. 5 there] from which L-GATr is derived, which shares performance characteristics with L-GATr. Using a message-passing network with sparse connections, which can make it possibly faster than a transformer, is not possible in these particle-physics experiments, because the interaction strength between particles does not decay with the Minkowski distance between their momenta.
In addition to this scaling with the number of particles (or tokens), we also demonstrate in the paper that L-GATr scales well to millions of parameters and to large training datasets (see Fig. 3). Overall, we consider the scalability one of the strongest properties of the model.
We would like to thank the reviewers again. We hope that we were able to address their questions adequately and look forward to the discussion period.
**References**:
- [1] A. Butter et al., "Jet Diffusion versus JetGPT - Modern Networks for the LHC", arXiv:2305.10475
- [2] J. Brehmer et al., "Geometric Algebra Transformer", NeurIPS 2023 | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
FuseAnyPart: Diffusion-Driven Facial Parts Swapping via Multiple Reference Images | Accept (spotlight) | Summary: This paper introduces a facial parts swapping framework based on diffusion models, named FuseAnyPart. Unlike traditional methods that swap entire faces, FuseAnyPart allows for the swapping of individual facial features. The framework fine-tunes a pre-trained CLIP model to extract features from different facial regions and a Stable Diffusion model to generate the swapped face. To merge these features, an Addition-based Injection Module is utilized, incorporating the facial region features into the UNet feature map. Extensive experiments validate the effectiveness of the proposed method.
Strengths: 1. The paper is easy to follow.
2. Utilizing a diffusion model to achieve region-based face swapping is interesting.
3. Extensive experiments verify the effectiveness of the proposed methods.
Weaknesses: 1. The results in Figure 3 show that the skin color also changes when only the eyes are swapped, which the authors did not discuss.
2. The method of feature injection into the SD through addition has already been used in IPAdapter and ControlNet.
3. Since there is no swapped face used for training, is the source face directly used as the target face during training? If so, how is the feature decoupling between different regions achieved during inference?
4. The term 'one-step' mentioned in line 65 contradicts the 50 steps of the DDIM sampler described in the Implementation Details. The term 'one-step' should be used with caution in methods based on diffusion models.
Technical Quality: 3
Clarity: 2
Questions for Authors: Fusing multiple facial parts to generate a new face is an interesting idea. However, it is unclear how to evaluate the performances quantitatively. In addition, such generated faces are not visually plausible to me. The FID scores also indicate that the proposed method is not state-of-the-art.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The proposed method can only perform facial parts swapping on aligned faces, which limits its applications in practical situations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[W1. The problem of skin color change.]**
FuseAnyPart may encounter skin color changes, particularly when there is a significant difference between the skin colors of the source images and the target image.
This issue arises because the source and target images are fused in the latent space, and the global attention mechanism diffuses the color features.
However, this skin-color-change issue can be effectively resolved by replacing the generated skin regions with the inverted latent representations of the original skin color using DDIM inversion. This ensures the skin color regions remain unchanged.
The improved results are illustrated in Figure 2 of the newly submitted PDF.
The pseudocode, rendered here as a Python sketch in the style of the diffusers scheduler API, is as follows.
```python
# Merging with the ground-truth latent inside the DDIM sampling loop.
# latents_ref_img: clean latent of the reference image; init_noise: the
# initial noise; pixel_mask: assumed 1 inside the swapped parts, 0 on skin.
if i < thres_i:
    # noise level at which the next denoising step operates
    noise_timestep = scheduler.timesteps[i + 1]
    # re-noise the reference-image latent to obtain the ground-truth noised latent
    gt_latent = scheduler.add_noise(latents_ref_img, init_noise, noise_timestep)
    # keep the generated content inside the mask, restore the original skin outside it
    latents = (1 - pixel_mask) * gt_latent + pixel_mask * latents
```
**[W2. Feature injection difference to IPAdapter and ControlNet.]**
IPAdapter is not an addition-based injection method. It employs a decoupled cross-attention strategy, where an additional cross-attention layer is added to each original cross-attention layer to inject image features. Moreover, our method performs better on the facial parts swapping task.
In Section 4.3 of our paper, titled "Cross-attention vs. Addition," we compared our feature injection method with cross-attention. The results indicate that our injection method is superior.
ControlNet is an addition-based injection method. We have not yet compared it in our experiments. However, for face parts swapping tasks, the majority of the target image remains unchanged, and the features needing injection are only those corresponding to the parts requiring replacement, thus eliminating the need for a complex network structure. Our method involves the addition of only linear layers, whereas ControlNet requires replicating the parameters of UNet. Therefore, our approach has a clear advantage regarding the increase in parameter count. For this task, our method is simple but effective.
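As a toy illustration of this addition-based injection (shapes, weights, and the bounding-box location below are made up for the sketch and are not the actual FuseAnyPart implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

c_unet, h, w = 8, 16, 16        # toy UNet feature-map shape
c_clip, grid = 4, 3             # toy part feature: a 3x3 grid of CLIP tokens

feature_map = rng.normal(size=(c_unet, h, w))
part_tokens = rng.normal(size=(grid * grid, c_clip))   # tokens for one facial part
W = rng.normal(size=(c_clip, c_unet)) * 0.1            # the only learned weights: a linear layer

# project the part tokens to UNet channels and lay them out on the part's grid
part_proj = (part_tokens @ W).T.reshape(c_unet, grid, grid)

# inject by addition inside the part's bounding box (rows/cols 4..6 here)
out = feature_map.copy()
out[:, 4:4 + grid, 4:4 + grid] += part_proj

assert out.shape == feature_map.shape
assert np.allclose(out[:, :4, :], feature_map[:, :4, :])  # untouched outside the box
```

The point of the sketch is that, unlike ControlNet's copied UNet branch, only a small linear projection is needed per injected part.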
**[W3. The training strategy.]**
FuseAnyPart uses reconstruction as a proxy task and does not require swapped paired faces, which draws inspiration from E4S and DiffSwap.
For instance, the target face image and the source face images, which provide different facial parts, are sampled from different images of the same identity.
Our paper details the training and inference pipeline in Lines 60-62.
A visual grounding model is employed to detect bounding boxes for feature decoupling across different regions.
**[W4. The term 'one-step'.]**
The 'one-step' mentioned in Line 65 means that the eyes, mouth, and nose can be swapped in a single inference, unlike traditional inpainting methods, where they are swapped sequentially.
Therefore, it does not contradict the 50 steps of the DDIM sampler.
We appreciate the reviewer's careful reading and will revise the paper in the final edition to clarify this point.
The term 'one-step' in Line 65 will be replaced with 'simultaneous'.
**[Q. Evaluate the performances quantitatively.]**
As described in Section 4.2, three metrics, including Fréchet Inception Distance (FID), Organ Similarity (Osim), and Reconstruction Mean Squared Error (MSE), are adopted to evaluate the performance quantitatively.
FID is used to evaluate the generated faces' overall image quality and visual realism.
Osim is used to evaluate how well individual features such as the eyes, nose, or mouth match between the generated image and the sources, focusing on the fidelity of the swap within localized regions.
The MSE metric measures the pixel-wise accuracy of the reconstructed image against the original. It helps quantify the preservation of the non-swapped parts of the face and the seamless integration of swapped parts.
Regarding the visual plausibility, we provided some qualitative results in the paper, and additionally, we have included more qualitative results in the newly submitted PDF.
The results showcase the efficacy of our method in generating plausible faces even when there are significant differences between the source and target images, including differences in age and race.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the rebuttal, which addressed my concerns. I have raised my score to borderline accept. | Summary: This paper delves into the strategy of facial parts swapping which are not studied before, their method aims at fusing facial parts from different sources into the overall background/face images. It involves masking facial landmark areas(i.e. eyes, mouth, nose) and fusing with mask-based operation. Finally, the conditions are injected into diffusion models with an Interpolation layer.
Strengths: 1. This paper addresses the problem of part-level facial swapping at the feature level, which is interesting.
2. The proposed injection method improves the final generation compared to the traditional cross-attention method.
3. Qualitative and Quantitative experiments have shown the effectiveness of the proposed method.
Weaknesses: 1. The evaluation metric (OSim) for whether the face is correctly swapped is not explained in detail; providing more textual description would make it easier for the reader to understand. For example, does the OSim-E input only include the eye image, or is it the entire image with only the eyes unmasked? Additionally, what is the label used for training OSim? Is it based on identity?
2. The Addition-based Injection Module is simple; do more advanced modules (for example, convolution/self-attention) lead to inferior performance?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. lines 163-166, do you have literature or visualization results to prove 'method can be ineffective when image features are misaligned with textual features'?
2. What is the target/labelled image for training, considering there is no correct swapped reference image? Is it classifier-free guidance?
3. Is the code available?
4. Which version of the VAE model is used in this work? Is there any lightweight but effective pre-trained VAE?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: This work is interesting for part-based facial editing, but some details are missing: the evaluation protocol (OSim) should be explained in detail, as well as the landmark detection network.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[W1. Details about OSim.]**
Osim measures the similarity between the swapped facial parts (eyes, nose, mouth) in the generated image and those in the reference images.
For example, we utilized the CelebA-HQ dataset and employed the grounding-dino detection model to identify and extract bounding boxes for the eyes.
Each cropped eye region (the eye image) was then labeled with the identity (ID) of the corresponding individual.
Then, we train a ResNet50 with an ArcFace loss, which maximizes the intra-class similarity (among the same IDs) and minimizes the inter-class similarity (across different IDs).
After training, we use the ResNet50 model to extract feature vectors from the eye regions. Osim is computed using cosine similarity between these feature vectors: $Osim = \frac{f(a)\cdot f(b)}{\Vert f(a) \Vert \Vert f(b) \Vert}$, where $f$ represents the feature extraction function implemented by our trained ResNet50 model and $a$ and $b$ are the input images of the specific organ.
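A minimal sketch of the final similarity computation (assuming the organ embeddings have already been extracted by the trained ResNet50; the vectors below are toy placeholders):

```python
import numpy as np

def osim(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Cosine similarity between two organ embedding vectors."""
    return float(np.dot(feat_a, feat_b) /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))

# identical embeddings -> similarity 1.0; orthogonal embeddings -> 0.0
a = np.array([1.0, 2.0, 3.0])
assert abs(osim(a, a) - 1.0) < 1e-9
assert abs(osim(np.array([1.0, 0.0]), np.array([0.0, 1.0]))) < 1e-9
```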
**[W2. Alternatives to the Addition-based Injection Module.]**
The Addition-based Injection Module beats multiple attention-based baselines in Table 2 and Figure 7.
Although simple, the Addition-based Injection Module is the most suitable for the facial parts swapping task and validated by extensive experiments.
**[Q1. The visualization results.]**
This point refers to the baseline method, IP-Adapter.
IP-Adapter depends on both text and image prompts.
If the text prompt is inaccurate or even conflicts with the image prompt, it may have little effect on the results, leading to outcomes that do not align with the text prompt.
We will clarify this point to ensure it is clear in the final edition.
Please check the Figure 5 in the newly submitted PDF.
**[Q2. The target image for training.]**
FuseAnyPart uses reconstruction as a proxy task for training and does not require swapped paired faces, which draws inspiration from E4S and DiffSwap.
For instance, the target face image and the source face images, which provide different facial parts, are sampled from different images of the same identity.
Our paper details the training and inference pipeline in Lines 60-62.
**[Q3. Code available.]**
Once the paper is accepted, the training and inference code, model weights, and essential documentation will be released to the public.
The authors aim to make a valuable contribution to the NeurIPS community.
**[Q4. The VAE version.]**
FuseAnyPart is based on the widely used SD-v1.5, and the VAE model utilized in this work is the same as that in SD-v1.5.
The authors acknowledge that a lightweight yet effective pre-trained VAE may exist, but exploring this is beyond the scope of FuseAnyPart.
**[L. The landmark detection network.]**
A visual grounding model is used to detect bounding boxes around faces, rather than relying on a landmark detection network, as described in Line 216.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the rebuttal. I think this response has addressed my concerns. I will keep the scores as it is, borderline accept. | Summary: This paper explores the partial face swapping problem. Rather than swapping the whole face from A to B, partial face swapping aims to swap some specific area (or organ) of A to B. In this paper, a diffusion based partial face framework is proposed. Besides, two modules are designed to better fuse the extracted feature into the diffusion unet. Facial Feature Decomposition effectively extracts facial semantics and Addition based Injection module integrates the semantics into the diffusion model. Further experiments demonstrate the effectiveness of the propsoed framework.
Strengths: 1. The task (partial face swapping) is interesting and more challenging compared to whole face swapping. It needs fine-grained controls and thus worth exploring.
2. The storyline of this paper is simple and clear.
Weaknesses: 1. The paper says "the primary challenge in facial parts swapping lies in the fusion mechanism". Could you please detail it? From my point of view, this challenge also exists in the conventional face swapping task. Face swapping manipulates the face area of an image, while partial face swapping manipulates a smaller area. Previous mask-based face swapping methods first generate faces with the same expression as the target face and then paste them onto the target face according to the mask.
2. Some details are missing in the method:
* What is the $z_T$ (the initial point of the denoising process) used in the training and inference? is it a Gaussian noise or the target image?
* Why the training objective is to reconstrct the target (Eq 5)?
* What is the dimension of the extracted feature $f$? If it is an $H\times W$ feature map, how is it fed into the MLP?
3. There are too few qualitative results in the article (the main paper and supp). The experiments adopt the CelebA dataset, which does not have IDs; how to sample from the same ID (Line 207)? I guess it is sampled from the same image?
4. More comparisons with conventional face swapping method (fsgan, simswap, diffFace) should be given.
Technical Quality: 2
Clarity: 2
Questions for Authors: See the weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The author have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[W1. The primary challenge lies in the fusion mechanism.]**
Traditional face-swapping methods first perform face reenactment and then paste it onto the target image in pixel space, as illustrated in FSGAN and DiffFace.
However, operations in pixel space often result in unnatural images with visible seams, leading to a low success rate in practical applications and producing many poor outcomes as shown in Figure 3 of the newly submitted PDF.
The current mainstream face-swapping techniques now perform the fusion of source and target images in the latent space, resulting in more harmonious generated images.
Therefore, the feature fusion mechanism becomes critical in affecting the quality of the generated images.
In the facial parts swapping task, the number of source images increases from one to multiple, further complicating the fusion process and making this issue more prominent.
As shown in Table 2 and Figure 1 of this paper, the authors conducted extensive experiments to validate the superiority of the proposed fusion mechanism quantitatively and qualitatively.
We thank the reviewer for highlighting this issue and suggesting that the task description be more precise.
However, we hope the unique aspects and challenges of the task are adequately appreciated.
**[W2. The details.]**
1) **The Initial Point of the Denoising process.** During the training phase, the initial point consists of concatenating the target image with noise in latent space, the masked image in latent space, and the mask itself, as illustrated by z\_t' in Figure 2.
In the inference phase, the target image with noise in latent space is replaced with Gaussian noise at the initial point.
The above is discussed in Lines 186-189 of the paper.
2) **The training Objective.** FuseAnyPart uses reconstruction as a proxy task and does not require swapped paired faces.
For instance, the target face image and the source face images, which provide different facial parts, are sampled from different images of the same identity.
Therefore, reconstructing the target image allows FuseAnyPart to gain the ability to generate natural images by merging different organs.
Our paper details the training and inference pipeline in Lines 60-62.
3) **The dimension of the extracted feature.**
The feature $f$ from the CLIP image encoder has a shape of $(h\times w) \times c$, which corresponds to $h\times w$ visual tokens.
This feature $f$ can be input into an MLP to align with the dimensions of the latent features in UNet.
**[W3. Questions on the CelebA dataset.]**
The CelebA dataset now includes identity annotations in the "Anno/Identity\_CelebA.txt" file, which can be downloaded from the official CelebA website. During training with FuseAnyPart, source and target images are sampled from different images of the same identity.
More qualitative results are added in Figure 4 in the newly submitted PDF, and they will be added in the final edition of the paper.
**[W4. More comparisons.]**
The state-of-the-art face-swapping methods supporting face parts swapping are all included in Table 1 of our paper.
According to their papers, FuseAnyPart beats DiffSwap in FID and OSim, and DiffSwap beats SimSwap (2020) and FSGAN (2019).
A qualitative comparison of FuseAnyPart with DiffFace is shown in Figure 3 of the newly submitted PDF, demonstrating a significant advantage of FuseAnyPart.
---
Rebuttal Comment 1.1:
Comment: Thanks for this rebuttal. I raise my score to borderline accept. | Summary: "FuseAnyPart: Diffusion-Driven Facial Parts Swapping via Multiple Reference Images" introduces a novel framework for swapping individual facial parts using a diffusion model that effectively utilizes multiple reference images. The paper outlines the methodological innovation and superiority of FuseAnyPart over traditional GAN-based and diffusion-based methods, which primarily focus on full-face swapping. This approach enables high-fidelity and cohesive blending of facial parts from disparate sources, enhancing fine-grained character customization capability.
Strengths: * **Originality**: This paper introduces a unique application of diffusion models to the problem of facial parts swapping, diverging from the traditional focus on full-face swaps.
* **Quality**: Demonstrates improved quality and robustness in facial parts swapping through qualitative and quantitative results.
* **Clarity**: The paper is articulate and well-organized, with thorough explanations and clear visual aids that enhance the understanding of the proposed method.
* **Significance**: Offers significant practical applications in various fields, including digital media creation and personalized entertainment.
Weaknesses: * **Complexity**: The computational complexity might limit its application in real-time or on lower-end devices.
* **Scope of Data**: More diverse testing on datasets from various demographics could enhance the robustness and generalization claims.
* **Dependency on High-Quality Inputs**: The method's effectiveness relies heavily on the quality of input images, which could limit its applicability in less-controlled environments.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. What is the performance impact when using lower-quality or varied lighting conditions in input images?
2. Could this method be adapted for real-time applications, and if so, what optimizations would be necessary?
3. How does the method perform when facial parts from significantly different racial or age groups are swapped?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors adequately discuss the limitations, including the high computational requirements and the dependency on high-quality reference images. They also mention potential challenges in diverse application scenarios, which is vital for setting realistic expectations for the method's deployment.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[W. The weakness of FuseAnyPart.]**
1) Diffusion models typically have high computational complexity due to the need for iterative denoising steps, which limits their application in real-time settings or on lower-end devices.
2) More diverse testing results are illustrated in Figure 1 in the newly submitted PDF to validate the robustness and generalization of FuseAnyPart.
3) Most models rely on high-quality training datasets. The quality of images can be improved using super-resolution methods.
For example, the dataset used in this work is CelebA-HQ, which is reconstructed with super-resolution through GANs.
**[Q1. Performance of low quality inputs.]**
The performance of FuseAnyPart may be affected by input images with lower quality or varying lighting conditions.
However, some preprocessing techniques, such as facial alignment and **super-resolution**, can be adopted to mitigate these effects.
**[Q2. Optimizations for real-time applications.]**
Algorithms like **Latent Consistency Models (LCM)** can overcome the slow iterative sampling process of Diffusion Models, enabling fast inference with minimal steps instead of the usual dozens or hundreds.
In engineering, techniques like int8 **model quantization** can significantly reduce computational load.
Together, these strategies can speed up FuseAnyPart.
**[Q3. Results of significantly different racial or age groups.]**
These results are shown in Figure 1 in the newly submitted PDF. | Rebuttal 1:
Rebuttal: Dear reviewers and meta reviewers,
We appreciate all reviewers for their valuable comments and suggestions.
We have carefully addressed the comments and added details and comparisons as follows:
- We have provided solutions to accelerate inference and handle low-quality input data.
- We have added a wider range of testing results, particularly for significantly different race and age groups.
- We have addressed the issue of skin color variation and provided visual demonstrations of the outcomes.
- We have detailed why the fusion mechanism poses the primary challenge in facial part swapping.
- We have clarified the training process and specific variables for better understanding.
- We have outlined the training objective of FuseAnyPart, which uses reconstruction as a proxy task.
- We have included new baseline results from DiffFace for qualitative comparison.
- We have resolved the issue concerning the identity files in the CelebA dataset.
- We have added details regarding the evaluation metric (OSim).
- We have included information on the Addition-based Injection Module.
- We have revised the term "one-step" to "simultaneously" for clarity.
We will release our code and checkpoints in the camera-ready version, and please see below our responses to each reviewer.
If you have any questions or suggestions, please feel free to leave your comments on OpenReview.
Authors of FuseAnyPart
Pdf: /pdf/dac18d7b8ba2948e94f14be6200680c3ab3ddab6.pdf | NeurIPS_2024_submissions_huggingface | 2024
Can Simple Averaging Defeat Modern Watermarks? | Accept (poster) | Summary: This paper introduces a study on the vulnerabilities of digital watermarking techniques to steganalysis attacks. Extensive experiments are conducted to demonstrate the effectiveness of steganalysis in detecting and removing watermarks from images, especially when targeting content-agnostic watermarking methods.
Strengths: 1. The paper is well-written and well-organized. The watermark detection and removal/forgery methods are simple and clear.
2. The method of estimating the watermark disturbance by subtracting the average of watermarked images from the average of original images is interesting.
3. The proposed watermark attacks are effective for content-agnostic watermarking methods.
Weaknesses: 1. This paper adopts a simple linear assumption that the watermarked image is obtained by adding the content-agnostic perturbation to the original image. The generality of this assumption needs to be clarified, as it forms the theoretical foundation for the watermark attacks discussed in this paper. For instance, the Tree-Ring method embeds information in the frequency domain of initial noise, but why does it conform to this model?
2. The definition of steganalysis needs to be clearly stated, as it is an important concept in this paper.
3. In section 3.1, the paper mentions that ''Instead of applying strong distortions (e.g., noise), the adversary could take a steganalysis approach, first **approximating w** and then crafting minimal distortions to fool D''. However, in the following method, the watermark is never approximated. Instead, the additive perturbation is directly predicted to fool D.
4. In Figure 2, why are the performance metrics inconsistent? Although the caption specifies the use of AUC, the figure mixes AUC with Bit Acc as metrics for different methods.
5. It is better to explain why there is a sudden drop when n = 5 for content-adaptive methods like Stable Signature.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations in section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Why Tree-Ring conforms to the additive Simple Linear Assumption model?
1. Addition in spatial domain equates to addition in frequency domain: $x(t)+y(t) \xleftrightarrow{\mathcal{F}} X(j\omega)+Y(j\omega)$.
2. Tree-Ring's pattern added to the initial noise propagates through the generation pipeline to the final image (Figure 6).
3. Thus, Tree-Ring conforms to our model, despite appearing to be a Ring in the frequency domain.
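As a quick sanity check of the linearity property invoked above, here is a minimal NumPy sketch (the arrays are synthetic stand-ins for an image and a watermark pattern, not the actual data from the paper):

```python
import numpy as np

# Linearity of the 2-D Fourier transform: F(x + y) = F(x) + F(y).
# An additive perturbation in the spatial domain is therefore also
# additive in the frequency domain, and vice versa.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))  # stand-in for an image
y = rng.standard_normal((64, 64))  # stand-in for a watermark pattern

lhs = np.fft.fft2(x + y)
rhs = np.fft.fft2(x) + np.fft.fft2(y)
print(np.allclose(lhs, rhs))  # True
```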
> The definition of steganalysis needs to be clearly stated, as it is an important concept in this paper.
We appreciate the suggestion. Steganalysis is the detection of hidden data within cover media. In our context, it's applied as a technique for watermark removal. We'll clarify this definition in the revised paper.
> In section 3.1, the paper mentions that ''Instead of applying strong distortions (e.g., noise), the adversary could take a steganalysis approach, first approximating w and then crafting minimal distortions to fool D''. However, in the following method, the watermark is never approximated. Instead, the additive perturbation is directly predicted to fool D.
Your understanding is correct. It was $\delta_w$ we approximated (denoted as $\hat{\delta}_w$), not $w$. We appreciate this observation and we will revise Section 3.1 to accurately reflect our method.
> In Figure 2, why are the performance metrics inconsistent? Although the caption specifies the use of AUC, the figure mixes AUC with Bit Acc as metrics for different methods.
Thanks for catching. The caption should be "Performance (AUC or bit accuracy)", not just AUC. We will revise.
> It is better to explain why there is a sudden drop when n = 5 for content-adaptive methods like Stable Signature.
- **The point at n=5 isn't a drop, but a discontinuity** between no removal (leftmost point) and watermark removal (other points).
- The lowest performance at n=5 occurs because small n values lead to imperfect $\delta_w$ approximations, causing larger image modifications and reduced watermark detection. We're happy to provide further clarification if needed.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. Most of my concerns are addressed. I still have questions about additive simple linear assumption model. Specifically, why such pattern added to the initial noise can be propagate through the diffusion pipeline to the final image, as such a generation pipeline is not a simple linear system? While experiments in Figure 6 support this the concept, a theoretical analysis would be beneficial to provide a deeper understanding of the underlying mechanisms.
---
Reply to Comment 1.1.1:
Comment: To address this question, we provide a brief theoretical analysis that offers insight into the underlying mechanisms.
Tree-Ring's diffusion pipeline utilizes DDIM sampling. One DDIM denoising step from a noisy image $\mathbf{x}\_t$ to a less noisy one $\mathbf{x}\_{t-\Delta t}$ is described by Equation 13 in [1], which can be rearranged as:
$$
\mathbf{x}\_{t-\Delta t} = \frac{\sqrt{\alpha\_{t-\Delta t}}}{\sqrt{\alpha\_t}}\mathbf{x}\_t+c\cdot \epsilon\_\theta^{(t)}(\mathbf{x}\_t),
$$
where $c$ is a time-dependent constant. Tree-Ring can be viewed as adding a systematic bias $\mathbf{\mu}$ (the ripple-like pattern visualized in Figure 6) to an initial noise vector $\mathbf{x}\_T$. Consequently, the first sampling step can be rewritten as:
$$
\mathbf{x}\_{T-\Delta t} = \frac{\sqrt{\alpha\_{T-\Delta t}}}{\sqrt{\alpha\_T}}(\mathbf{x}\_T + \mathbf{\mu})+c\cdot \epsilon\_\theta^{(T)}(\mathbf{x}\_T + \mathbf{\mu}).
$$
Empirical evidence suggests that the output of $\epsilon\_\theta$ follows a zero-mean Gaussian distribution, regardless of the timestep and the presence of bias $\mathbf{\mu}$ in the input. Therefore, the term $c\cdot \epsilon\_\theta^{(t)}(\mathbf{x}\_t + \mathbf{\mu})$ does not affect the accumulation of influence from $\frac{\sqrt{\alpha\_{t-\Delta t}}}{\sqrt{\alpha\_t}}\mathbf{\mu}$ through sampling. The accumulated bias term at $\mathbf{x}\_0$ can be expressed as:
$$
\frac{\sqrt{\alpha\_{t\_1}}}{\sqrt{\alpha\_T}}\cdot
\frac{\sqrt{\alpha\_{t\_2}}}{\sqrt{\alpha\_{t\_1}}}\cdot
\frac{\sqrt{\alpha\_{t\_3}}}{\sqrt{\alpha\_{t\_2}}}\cdots
\frac{\sqrt{\alpha\_{0}}}{\sqrt{\alpha\_1}}
\mathbf{\mu}
=\frac{\sqrt{\alpha\_0}}{\sqrt{\alpha\_T}}\mathbf{\mu}
=\frac{\sqrt{0.9991}}{\sqrt{0.0047}}\mathbf{\mu}
=14.5799\mathbf{\mu}
$$
in Tree-Ring's implementation. This calculation demonstrates that the bias $\mathbf{\mu}$ is significantly amplified throughout the generation process.
Assuming that the low-level pattern is largely preserved through VAE decoding, this amplification explains how a content-agnostic ripple-like pattern can propagate through the complex generation process and manifest in the final images. This analysis provides a theoretical foundation for why the additive simple linear assumption model is still valid with the presence of a diffusion generation pipeline.
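The telescoping product above can be verified numerically; a small Python sketch using only the two alpha values quoted in the derivation (all other details of the sampler are abstracted away):

```python
import math

# The product of sqrt(alpha_{t-dt}) / sqrt(alpha_t) over all DDIM steps
# telescopes to sqrt(alpha_0 / alpha_T).  With the values quoted from
# Tree-Ring's implementation:
alpha_0, alpha_T = 0.9991, 0.0047
amplification = math.sqrt(alpha_0 / alpha_T)
print(round(amplification, 4))  # 14.5799
```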
We're happy to help with any more questions or concerns you might have.
[1] Song et al., Denoising Diffusion Implicit Models, ICLR 2021. | Summary: The paper introduces steganalysis techniques targeting watermark removal and forgery. The authors demonstrate through experiments that existing content-agnostic watermarking methods are unable to resist steganalysis attacks, advocating for the adoption of content-adaptive strategies.
Strengths: This paper is interesting and well-written. The author introduces steganalysis attack can remove and forge watermarks from watermark images using simple linear addition and subtraction operations in both gray-box and black-box modes.
Weaknesses: 1. The examples provided by the authors in Figure 7 show that the images after watermark removal exhibit good image quality compared to the watermarked images. However, the PSNR data shown in Figure 2 are not ideal. For instance, in the first example, "Tree-Ring", the average PSNR of the watermark-removed image is only 17. According to [1], distortion in modified images with a PSNR lower than 36 dB is noticeable to the human visual system. If the image quality is severely degraded after watermark removal, then the act of removing the watermark becomes meaningless.
2. The author said “In contrast, traditional distortion-based removal techniques, such as noise perturbations or blurring, typically result in substantial perceptual degradation (as visualized in Figure 9 in Appendix A.2)”. But I think comparing perceived degradation like this isn't fair because it's difficult to objectively evaluate the severity of different attacks.
3. While the authors present an interesting steganalysis method for removing or forging content-agnostic watermarks, the contribution of this approach may be somewhat limited. Additionally, the method's requirement for a large number of pre-existing watermarked images poses practical challenges for real-world applications.
[1] S. Sarreshtedari and M. A. Akhaee, “A source-channel coding approach to digital image protection and self-recovery,” IEEE Trans. Image Process., vol. 24, no. 7, pp. 2266–2277, Jul. 2015.
Technical Quality: 2
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: 1. The premise of steganalysis for watermarks requires a large number of images embedded with watermarks, unlike attacks such as JPEG compression, which can directly affect the watermark itself. This limitation makes steganalysis attacks somewhat impractical in real-world applications because attackers may not always have access to a sufficient number of watermarked images.
2. This method is only effective against content-agnostic watermarking.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Why the reported PSNR for Tree-Ring is low (17dB)?
**Short Answer: Our method achieves 29.79dB/34.58dB (blackbox/graybox, n=5000) PSNR when removing Tree-Ring watermarks, indicating minimal visible alteration. The 17dB PSNR value mentioned in the paper measures a different aspect.**
$$
\text{Clean image } A \overset{\text{Add watermark}}{\longrightarrow} B \overset{\text{Remove watermark}}{\longrightarrow} C
$$
The 17dB PSNR reported in the paper compares $A$ and $C$, reflecting the cumulative impact of watermarking and removal (Sec 4.1). However, this doesn't represent the degradation caused by watermark removal alone.
To assess removal-induced degradation, we calculate PSNR between $B$ and $C$, yielding 29.79dB/34.58dB (blackbox/graybox, n=5000) for Tree-Ring watermarks.
Tree-Ring, a diffusion-based semantic watermark method, alters image layout significantly. This results in a low PSNR (~16dB) between $A$ and $B$ (Figure 2 NRmv). Consequently, $C$'s PSNR relative to $A$ is also low due to layout differences, despite $C$ closely resembling $B$. This low $A$-$C$ PSNR doesn't accurately reflect the extent of modification during watermark removal.
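The averaging-based estimate of $\delta_w$ itself can be illustrated with a toy NumPy sketch under the paper's additive model (the 8x8 "images", the disjoint sets, and the pattern are synthetic stand-ins, not the actual experimental data):

```python
import numpy as np

# Toy sketch of the black-box averaging attack: x_wm = x + delta_w
# (the Simple Linear Assumption), estimated from two disjoint image sets.
rng = np.random.default_rng(0)
n = 5000
clean_a = rng.uniform(0, 1, size=(n, 8, 8))   # images that get watermarked
clean_b = rng.uniform(0, 1, size=(n, 8, 8))   # disjoint unwatermarked set
delta_w = 0.2 * rng.standard_normal((8, 8))   # fixed content-agnostic pattern
watermarked = clean_a + delta_w

# Image content averages out across many samples; the shared
# perturbation delta_w survives the subtraction of the two means.
delta_hat = watermarked.mean(axis=0) - clean_b.mean(axis=0)
err = np.abs(delta_hat - delta_w).max()
print(err < 0.05)  # True here; the error shrinks like O(1/sqrt(n))

# Removal: subtract the estimate from any watermarked image.
restored = watermarked[0] - delta_hat
```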
> Comparing perceived degradation like this isn't fair because it's difficult to objectively evaluate the severity of different attacks.
**Short Answer: We evaluated both subjectively and objectively. The objective quality evaluation is in Figure 3, which is supplemented by the visualizations (subjective quality evaluation) in the Figure 9 you mentioned.**
Figure 9 supplements Figure 3 in the main text, which already includes objective quality evaluations. We evaluated four metrics: PSNR, SSIM, LPIPS, and SIFID, and plotted the trade-off curves between watermark removal effectiveness (vertical axis) and image quality degradation (horizontal axis) for different distortion methods. Methods closer to the lower-left corner of the graph are better because they achieve significant watermark removal with minimal distortion. Figure 9 visualizes the data points and their parameters used to plot these curves, thus serving as a complement to Figure 3.
> The method's requirement for a large number of pre-existing watermarked images poses practical challenges for real-world applications. Attacks such as JPEG compression can directly affect the watermark itself.
- **The setting is practical:** For example, popular products like Stable Diffusion add a fixed watermark to all generated images by default. Online platforms like Midjourney can add a unique watermark to all images generated by the same user. Moreover, these products produce a large number of images every day. For such popular products, it is possible to obtain tens or hundreds of pre-existing watermarked images.
- **Steganalysis requires fewer images than recent works:** Many watermark detectors like Tree-Ring have demonstrated robustness against JPEG compression. Recent works have explored adversarial attack-based methods to improve watermark removal. While these methods have successfully removed Tree-Ring watermarks, they require training a surrogate model with 3000 to 7500 images [1, 2]. In comparison, steganalysis requires as few as 500 images for effective removal without significant artifacts, as shown in our experiments.
> The contribution of the approach may be somewhat limited.
We argue that our work has made significant contributions. As outlined in our common rebuttal:
- **We are the first to introduce a black-box, training-free method** that effectively removes and forges Tree-Ring watermarks, which is a SOTA diffusion-based watermarking method.
- **We offer a new perspective on understanding watermarks** by classifying them and visualizing the patterns of content-agnostic watermarks, providing explainability on why these watermarks are vulnerable.
- **We highlight that addressing the content-agnostic issue is a future direction** for robust watermarking, as we demonstrated the wide impact of simple steganalysis on several modern detectors, such as Tree-Ring, RAWatermark, and RoSteALS.
**References**
[1] M. Saberi et al., Robustness of AI-image detectors: Fundamental limits and practical attacks, 2023.
[2] B. An et al., Benchmarking the robustness of image watermarks, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed response. I appreciate that most of my questions have been addressed. I will increase my score; however, given the inherent limitations of this method, which is ineffective for content-adaptive issues, I will raise the score to 5 (borderline accept).
---
Reply to Comment 1.1.1:
Comment: Thank you for addressing our responses positively. We appreciate your recognition of our rebuttal and the contributions of our work. We understand the limitation you've mentioned, which we have repeatedly mentioned in the paper as well.
Additionally, we kindly wanted to bring to your attention that the score hasn't been updated yet. We appreciate your time and consideration in doing so. | Summary: The paper addresses vulnerabilities in digital watermarking methods, especially those that are content-agnostic. These methods, which embed fixed watermark patterns regardless of image content, are susceptible to steganalysis attacks that can extract and manipulate these patterns, potentially removing or forging watermarks with minimal impact on perceptual quality.
Strengths: The findings of this paper are interesting, and it is reasonably well-written and easy to follow.
1. This paper introduces a novel method for steganalysis that effectively exposes vulnerabilities in content-agnostic digital watermarking techniques. Using simple averaging to extract watermark patterns, the study demonstrates a practical approach that can be applied in gray-box and black-box settings.
2. This paper provides a thorough evaluation of various watermarking methods under different settings. It includes quantitative and qualitative analyses across eight different watermarking techniques, highlighting the particular susceptibility of content-agnostic methods to steganalysis attacks.
3. Recognizing the limitations of current watermarking methods, the paper proposes actionable security guidelines and strategies for enhancing digital watermark security.
Weaknesses: 1. The steganalysis method proposed in this paper is fundamentally general and not restricted to images. However, the experiments are confined to analyzing image watermarks. Providing evidence of successful steganalysis in other modalities, such as audio, would further validate and broaden the method's impact.
2. The steganalysis method proposed in this paper can extract a unified low-level watermark pattern from tree-ring watermarked images. So why can the tree-ring change the layout of the generated image?
3. The quality of the figures could be further improved. For example, the font in Figures 2, 3, and 4 is too small. Please enlarge it for easier reading.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please see above.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Please see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The method is fundamentally general and not restricted to images. Providing its effectiveness on other modalities, such as audio, would further validate and broaden the impact.
**You are correct that the issue extends beyond images.** After conducting steganalysis on AudioSeal [1] invisible audio watermarks, we found that it also adds a content-agnostic watermark that can be removed (<0.76 detection rate). Your suggestion further underscores the significance of our work as a reminder of the re-emerging content-agnostic issue.
| Graybox | NRmv | 5 | 10 | 20 | 50 | 100 | 200 | 500 | 1000 | 2000 | 5000 |
|----------------|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| Detection rate | 1.000 | 0.376 | 0.463 | 0.540 | 0.636 | 0.675 | 0.694 | 0.737 | 0.750 | 0.750 | 0.755 |
| SNR | 30.432 | 26.647 | 27.435 | 28.690 | 29.516 | 29.828 | 30.107 | 30.102 | 30.305 | 30.408 | 30.343 |
| Blackbox | NRmv | 5 | 10 | 20 | 50 | 100 | 200 | 500 | 1000 | 2000 | 5000 |
|----------------|:-------:|:-------:|:------:|:------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| Detection rate | 1.000 | 0.352 | 0.244 | 0.269 | 0.399 | 0.517 | 0.561 | 0.637 | 0.622 | 0.669 | 0.729 |
| SNR | 30.432 | -0.146 | 3.290 | 7.865 | 13.468 | 16.908 | 19.534 | 22.235 | 24.502 | 26.831 | 28.450 |
> Why can Tree-Ring change the layout of the generated image?
**Adding a Tree-Ring watermark inevitably changes the generation seed.** Replacing the low-frequency part of the initial noise with a Ring pattern will cause global changes of the noise pattern, and hence the layout of the generated image will change. However, we have visualized that Tree-Ring also contains a content-agnostic component, and removing this part of the watermark is enough to fool the tree-ring watermark detector, which makes Tree-Ring vulnerable.
> The quality of the figures could be further improved.
We will increase the font size in Figure 2-4 to enhance readability, and improve the overall quality of the figures.
**References**
[1] Robin San Roman et al., Proactive Detection of Voice Cloning with Localized Watermarking, 2024.
---
Rebuttal 2:
Comment: Thank you for your response. I would appreciate some further clarifications on the following points:
1. **Regarding the setup of audio experiments**
Could you specify which dataset you used and whether any preprocessing was applied?
2. **On content-adaptive audio watermarks**
Have you observed any content-adaptive audio watermarks? Are the conclusions consistent with those on images? Could you share some results?
3. **Clarification on Tree-Ring**
To confirm, are you suggesting that the tree-ring affects the generated images in both high-level aspects (evident in the layout changes of generated content) and low-level aspects (appearing as a content-agnostic pattern overlaid on the generated images)? If this is the case, I would recommend explicitly stating in the paper that "watermarks added in the latent domain of diffusion models may not always result in a semantic watermark and could remain content-agnostic." Highlighting this risk is crucial for future security of diffusion watermarking.
---
Rebuttal Comment 2.1:
Comment: Thanks for your thoughtful feedback. We now respond to your follow-up questions:
1. **Regarding the setup of audio experiments**: We randomly sampled audio clips from the zh-CN subset of the Common Voice dataset [1]. For black-box settings, we ensured that the unwatermarked and watermarked audio sets were disjoint. All audio clips were sampled at 16kHz and preprocessed by cropping to retain only the first two seconds. This setup ensures that all audio clips contain watermarked segments and are aligned [2, 3].
2. **On content-adaptive audio watermarks**: Yes, we observed that WavMark [2] adds content-adaptive watermarks to audio clips.
| Blackbox | NRmv | 5 | 10 | 20 | 50 | 100 | 200 | 500 | 1000 | 2000 | 5000 |
|----------|:-------:|:-------:|:-------:|:------:|:------:|:------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| Bit Acc | 0.82 | 0.0088 | 0 | 0 | 0.01 | 0.0256 | 0.0294 | 0.1062 | 0.25 | 0.3669 | 0.85 |
| SNR (dB) | 44.0328 | -6.7481 | -4.0659 | 0.0338 | 5.5133 | 9.1686 | 13.0371 | 17.1913 | 20.6909 | 24.5124 | 28.1818 |
When averaging 5000 audio samples, the bit accuracy after watermark removal still reached 0.85. This demonstrates that steganalysis is not effective in reducing the watermark's bit accuracy. This finding is consistent with our conclusions from image experiments, indicating that across various modalities, our method can remove content-agnostic watermarks but is ineffective against content-adaptive watermarks.
3. **Clarification on Tree-Ring**: Exactly. We are demonstrating that adding a Tree-Ring watermark affects both aspects. Tree-Ring inevitably changes the initial noise, which leads to changes in layout. This has been visualized in the Tree-Ring paper. Simultaneously, our experiments prove that Tree-Ring's watermarking process also adds a content-agnostic component. It is primarily this content-agnostic pattern that affects Tree-Ring's watermark detection.
---
[1] R. Ardila et al., Common voice: a massively-multilingual speech corpus, LREC 2020.
[2] G. Chen, WavMark: Watermarking for audio generation, Arxiv 2023.
[3] R. S. Roman, Proactive detection of voice cloning with localized watermarking, ICML 2024.
---
Rebuttal 3:
Comment: Thank you for the authors' clarification and extra experiments. My concerns have been addressed. I will raise my score to 8. If possible, please incorporate the extra experiments and discussion into the next version of the paper.
---
Rebuttal Comment 3.1:
Comment: Thanks for your thoughtful feedback and for raising the score. We'll include the additional experiments and discussion, which we also believe could broaden the impact of our work. | Summary: This paper introduces a steganalysis-based attack aimed at image watermarking methods. The attack is effective in both gray-box and black-box scenarios and focuses on identifying repeating patterns present in watermarked images. These patterns can be exploited to either remove watermarks or add them to non-watermarked images without needing access to the original watermarking algorithm. Experimental results demonstrate that this attack can successfully compromise watermarks that use content-agnostic watermark patterns.
Strengths: - The authors illustrate the vulnerability of content-agnostic watermarks to simple attacks, serving as a good reminder for researchers to ensure their watermarks are content-aware.
- The proposed attack is simple and, compared to some existing watermark attacks, has a lower computational cost. However, it's worth noting that the computational cost of other attacks is usually not significant either, so the value of this simplicity and speed-up remains debatable.
Weaknesses: - Similar ideas (i.e., considering watermarked and non-watermarked samples to extract watermark patterns for removal and forgery) have been explored in existing works [1][2]. For instance, [1] discusses a spoofing attack using a noise image $X$, watermarking it to obtain $X_{wm}$, and then adding $(X_{wm}-X)$ to clean images to create watermarked versions. Both [1] and [2] also describe a surrogate model adversarial attack, which involves training a CNN to distinguish between watermarked and non-watermarked images. This CNN is then used to adversarially alter samples to change their watermark status. This adversarial attack is a more advanced version of the method suggested in this paper and can potentially target content-aware watermarks as well.
- The authors claim, "Notably, our steganalysis-based approach is the first to effectively remove Tree-Ring watermarks without access to the algorithm," which is inaccurate. The adversarial attack proposed in [1], though ML-based, requires only a set of watermarked and non-watermarked samples (similar to the black-box attack described in this paper) to break Tree-Ring.
- The method introduced in this paper lacks novelty, and the vulnerability of content-agnostic watermarks to this type of attack is trivial and not a significant discovery.
[1] Saberi, M., Sadasivan, V. S., Rezaei, K., Kumar, A., Chegini, A., Wang, W., and Feizi, S. Robustness of AI-image detectors: Fundamental limits and practical attacks, 2023.
[2] An, B., Ding, M., Rabbani, T., Agrawal, A., Xu, Y., Deng, C., Zhu, S., Mohamed, A., Wen, Y., Goldstein, T., et al.: Benchmarking the robustness of image watermarks, 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I suggest that the authors compare their attack to existing watermark attacks (e.g., from this watermark benchmark [1]). This comparison is necessary to illustrate the advantages of their attacks.
- The results for the forgery attack are provided only for the Tree-Ring watermark. I recommend including results for other watermarks as well to provide a more comprehensive evaluation.
[1] An, B., Ding, M., Rabbani, T., Agrawal, A., Xu, Y., Deng, C., Zhu, S., Mohamed, A., Wen, Y., Goldstein, T., et al.: Benchmarking the robustness of image watermarks, 2024.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: - The attack only works on content-agnostic watermarks, which the authors adequately mentioned in several parts of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Similar ideas have been explored in existing works [1, 2].
We argue that our work fundamentally differs from [1, 2] with distinctive advantages.
**Differences with the spoofing attack in [1]**
The spoofing attack described in [1] requires access to the watermark encoder (white-box) and does not generalize to removal, while our steganalysis-based approach unifies removal and forgery without accessing the decoder (black-box). In terms of explainability, our method extends beyond [1] by using steganalysis to extract content-agnostic watermarks. This step enables visualizing the extracted patterns and showing that they are indeed added during forgery, whereas [1], lacking this step, cannot explain why, or through which component, the weak overlaid watermark signal enables spoofing. We thus believe the idea explored by [1] is fundamentally different from our work.
**Differences with adversarial attacks in [1, 2]**
Steganalysis, adversarial, and distortion-based attacks are parallel methods, each fundamentally different from the others. Steganalysis attacks involve extracting hidden information to facilitate removal, adversarial attacks focus on solving optimization problems to harness adversarial examples, and distortion-based attacks typically employ image-processing techniques to create distortions. They are not variations of one another; therefore, we don't think one is a more advanced version of another.
Specifically, steganalysis has advantages over the adversarial attacks described in [1, 2]: our approach can effectively remove watermarks from just 500 images without leaving noticeable artifacts, seamlessly unifying both removal and forgery under whitebox and blackbox scenarios. In contrast, the adversarial attacks in [1, 2] require at least 3000 images for training. The attack in [2] also does not support forgery, as perturbations disrupt the entire latent space. The claim about the potential applicability of AdvCls to content-aware watermarks remains debatable [1, 2], as both works only demonstrated effectiveness in removing Tree-Ring watermarks, unless large perturbations are used ($\epsilon>10$) [1]. [1] fails on StegaStamp. [2] fails on Stable Signature and StegaStamp.
**In conclusion**, the uniqueness of steganalysis seamlessly unifies removal and forgery with visualizable explainability under both whitebox and blackbox settings. It requires significantly fewer images than adversarial attacks, further showcasing its advantage over existing approaches.
> Steganalysis is not the first blackbox method, as [1] also works under blackbox.
Thanks for pointing this out. We will rephrase to clarify that our method is the first that is training-free.
> The method lacks novelty, and the vulnerability is trivial.
We argue that our method is novel, and the vulnerability is severe. As outlined in our common rebuttal:
- **Novelty**
- We are the first to introduce a blackbox training-free method that effectively removes and forges Tree-Ring watermarks.
- We are the first to explain what modern content-agnostic watermark patterns look like (such as the Tree-Ring ripples), and why they can be easily removed.
- We highlight a future direction for robust watermarking: to address the content-agnostic issue by evaluating against steganalysis attacks, similar to how robustness is evaluated against distortions.
- **Severity and Impact**: Our experiments show that a sufficiently simple and straightforward method easily fools several modern watermark detectors. The vulnerability is therefore severe with wide impact.
> Comparison with the watermark benchmark [2].
We compared with attacks in [2] following its Detection setting, reporting TPR@0.1%FPR (`low_1000` in its code). The comparison is on Tree-Ring watermarks, which is the watermark that could be successfully removed by both [2] and our work.
|Method|TPR@0.1%FPR|
|---|---|
|Dist-Rotation / Rcrop / Erase / Bright / Contrast / Blur / Noise / JPEG|0.009 / 0.013 / 1.000 / 0.992 / 0.995 / 0.230 / 0.945 / 0.688|
|DistCom-Geo / Photo / Deg|0.016 / 0.994 / 0.726|
|Regen-Diff / DiffP / VAE / KLVAE / 2xDiff / 4xDiff|0.407 / 0.397 / 0.464 / 0.964 / 0.222 / 0.171|
|AdvEmbG-KLVAE8 / RN18 / CLIP / KLVAE16 / SdxlVAE|0.152 / 0.898 / 0.853 / 0.751 / 0.835|
|AdvCls-UnWM&WM / Real&WM / WM1&WM2|0.291 / 1.000 / 0.253|
|Ours n=5 / 10 / 20 / 50 / 100 / 200 / 500 / 1000 / 2000 / 5000|**0.000** / **0.000** / **0.000** / **0.000** / **0.000** / **0.000** / **0.000** / **0.000** / **0.000** / **0.000**|
As can be seen, our removal lowers TPR@0.1%FPR to near 0, more effective than any attacks in [2].
> Include forgery results for other watermarks to provide a more comprehensive evaluation.
|Method/Metric|Setting|NoForgery|5|10|20|50|100|200|500|1000|2000|5000|
|---|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ROS (Bit Acc)|Graybox|0.506|0.990|0.988|0.987|0.987|0.987|0.988|0.989|0.988|0.988|0.988|
|RAW (AUC)|Graybox|0.500|0.732|0.732|0.730|0.727|0.723|0.722|0.721|0.721|0.721|0.721|
|SVD (Bit Acc)|Graybox|0.520|0.568|0.558|0.554|0.562|0.563|0.560|0.561|0.559|0.562|0.557|
|ROS (Bit Acc)|Blackbox|0.506|0.900|0.926|0.953|0.968|0.978|0.982|0.984|0.983|0.983|0.982|
|RAW (AUC)|Blackbox|0.500|0.010|0.018|0.038|0.051|0.069|0.085|0.127|0.184|0.222|0.243|
|SVD (Bit Acc)|Blackbox|0.520|0.530|0.530|0.510|0.528|0.535|0.480|0.517|0.488|0.493|0.517|
- **RoSteALS (ROS)**: highly effective (0.9+ bit acc), suggesting RoSteALS adds linearly content-agnostic watermarks
- **RAWatermark (RAW)**: moderately effective under graybox (0.72 AUC) but behaves like removal under blackbox (0.25 AUC)
- **DwtDctSvd (SVD)**: ineffective forgery, suggesting the effective removal could be due to $\delta_w$ disrupting the decoding space (similar to AdvCls-WM1&WM2 in [2])
**References**
[1] M. Saberi et al., Robustness of AI-image detectors: Fundamental limits and practical attacks, 2023.
[2] B. An et al., Benchmarking the robustness of image watermarks, 2024.
---
Rebuttal 2:
Comment: > The spoofing attack described in [1] requires access to the watermark encoder (white-box) and does not generalize to removal, while our steganalysis-based approach unifies removal and forgery without accessing the decoder (black-box).
I understand the authors' point about their attack being black-box, in contrast to the spoofing attack from [1]. However, the spoofing attack is very similar to the steganalysis attack from this paper, and while [1] does not discuss generalizing the attack to watermark removal, I don't see a reason why it wouldn't work.
> It requires significantly fewer images than adversarial attacks, further showcasing its advantage over existing approaches.
Reading the authors' response, I accept that their method, while having similarities to the adversarial attack, has its own advantages and disadvantages (only working on content-unaware watermarks).
I still believe that the vulnerability of content-unaware watermarks to this attack is trivial (especially in the case of the gray-box setting). However, for the black-box setting, where non-paired samples of watermarked and non-watermarked images are used, there is more novelty and contribution. I appreciate the authors for adding the new results for the comparison to the attacks from [2], and the forgery results. I will increase my score to 5 (borderline accept).
[1] M. Saberi et al., Robustness of AI-image detectors: Fundamental limits and practical attacks, 2023.
[2] B. An et al., Benchmarking the robustness of image watermarks, 2024.
---
Rebuttal Comment 2.1:
Comment: Thank you for your follow-up comments and recognizing our work's strengths and contributions. We appreciate the increased score.
Regarding your concerns:
> The spoofing attack is very similar to the steganalysis attack from this paper, and while [1] does not discuss on generalization of the attack on watermark removal, I don't see a reason why it wouldn't work.
[1]'s spoofing method requires **(1)** access to the model with **(2)** the original watermark key (bit sequence/Tree-Ring pattern), which limits it to whitebox attacks. In contrast, our method is capable of performing blackbox attacks, as it extends beyond [1] by utilizing steganalysis to extract content-agnostic watermarks. This also offers better explainability by enabling the visualization of watermark patterns extracted.
We generalized [1] to watermark removal with the following setup:
$$
\text{Original spoofing:} \quad x_{\text{spoofed}} = \beta x_{\text{clean}} + (1-\beta) w, \quad \beta = 0.7;
$$
$$
\text{Generalized removal:} \quad x_{\text{removed}} = \beta x_{\text{w}} + (1-\beta) w, \quad \beta = 1.3.
$$
Our black-box method reduced Tree-Ring's AUC to 0.241 with a PSNR of 29.79dB (before and after removal), while [1]'s removal only achieved 0.929 AUC with 22.71dB PSNR. **This demonstrates that [1] is ineffective at removing Tree-Ring watermarks, whereas our method achieves superior removal with less quality degradation.**
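The two blending rules above can be written as one function. The following toy sketch is our own illustration of the setup (linear watermark model on synthetic data), not code from [1]:

```python
import numpy as np

# Linear watermark model: a watermarked image is x_clean + w.
rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, size=64)        # extracted watermark pattern
x_clean = rng.normal(0.0, 1.0, size=64)  # non-watermarked image
x_wm = x_clean + w                       # watermarked image

def blend(x, w, beta):
    """Single blending rule covering both attacks: beta*x + (1-beta)*w."""
    return beta * x + (1.0 - beta) * w

x_spoofed = blend(x_clean, w, beta=0.7)  # spoofing: adds 0.3*w to the image
x_removed = blend(x_wm, w, beta=1.3)     # removal: subtracts 0.3*w (1-beta<0)
```

The point of the generalization is that removal is the same operation with beta above 1, so that the (1 - beta) coefficient on the watermark pattern becomes negative.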
> I still believe that the vulnerability of content-unaware watermarks to this attack is trivial (especially in case of the gray-box setting).
1. We're the first to identify and systematically evaluate the content-agnostic vulnerability in modern watermarking systems.
2. Our finding, though simple, has broad implications for various watermarking methods. Highlighting this blind spot is crucial for guiding future research towards steganalysis-secure approaches.
3. Our graybox setting provides explainability to the content-agnostic issue. It enriches the application scenario and serves as a complement to the attack system.
[1] M. Saberi et al., Robustness of AI-image detectors: Fundamental limits and practical attacks, 2023.
---
Reply to Comment 2.1.1:
Comment: Please feel free to raise any further concerns or questions. We would be glad to provide additional details. | Rebuttal 1:
Rebuttal: ### Common Response
We appreciate the reviewers' comments and the opportunity for rebuttal. Here we would like to clarify the significance and contributions of our work.
**Our contributions**
- **We introduce the first blackbox, training-free method** that successfully removed and forged Tree-Ring watermarks. This method integrates watermark removal and forgery into a single operation, enhancing diversity in watermark analysis.
- **We offer a new perspective on understanding watermarks**: By classifying watermarks into content-agnostic and content-adaptive, we provide explainability. We visualize the extracted content-agnostic patterns (such as the ripple-like pattern in Tree-Ring or the vertical bars in DwtDctSvd's $C_B$ chroma). This perspective helps explain why some watermarks are vulnerable to steganalysis.
- **We highlight a future direction for robust watermarking: addressing the content-agnostic issue.** We identify the recurring problem where recent works, despite their methodological complexity, still add fixed patterns without considering the image content. For example, we visualized that a Tree-Ring pattern can propagate through a DDIM diffusion sampling process while remaining content-agnostic, and watermark decoding relies on this pattern. This allows a sufficiently simple steganalysis to easily deceive several modern watermark detectors, including DNN-involved methods like Tree-Ring, RAWatermark, and RoSteALS. We provide security guidelines, recommending that future researchers include additional evaluations against simple steganalysis attacks, contributing to the security and advancement of the watermarking community. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Gradient Guidance for Diffusion Models: An Optimization Perspective | Accept (poster) | Summary: This paper investigates gradient guidance for adapting or fine-tuning pre-trained diffusion models from an optimization perspective.
The authors propose a look-ahead-loss-based gradient guidance and two variants of diffusion-based generative optimization algorithms utilizing it.
The authors provide theoretical guarantees for adapting/fine-tuning diffusion models.
Strengths: Overall, although this is a theory paper, it is very impressive. It uses theoretical approaches to explain several aspects of guided diffusion:
1. naive guidance doesn't work on the latent structure of data,
2. an algorithm for guided diffusion using gradient queries on new samples,
3. a proposed adaptive gradient-guided diffusion, where both the pre-trained score network and the guidance are iteratively updated using self-generated samples.
Weaknesses: No obvious weakness.
If possible, please include the reason for each step in your proof, i.e. the first step follows from???, the second step follows from???. Then it will be a perfect paper. I will increase the score if you can fix this issue.
Technical Quality: 4
Clarity: 4
Questions for Authors: No questions
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of our theoretical contributions and your valuable suggestions! We've added more detailed explanations of the derivations in the proof. | Summary: This paper proposes a new approach to the problem of gradient-guided generation for diffusion models. The main challenge of gradient-guided generation is to maintain the generated sample within the support of the sample distribution. To address this issue, this work starts with a simplified model, using a linear objective function and a low-dimensional linear sample space, and provides a closed form for the gradient guidance. Under this simplified model, the paper offers a theoretical analysis to show the converged distribution of the generated samples for concave and smooth objective functions. Lastly, it also proposes another variant that allows fine-tuning over pretrained models, further improving optimization performance
Strengths: 1. The idea of deriving this form of gradient guidance is interesting and novel, particularly the use of a low-dimensional linear sample space to demonstrate the problem of out-of-distribution in gradient-guidance sampling.
2. The theoretical results are solid, and I appreciate the finding that the proposed method can indeed preserve the low-dimensional linear structure theoretically.
Weaknesses: 1. Although I appreciate the motivation and analysis in this work, I do not think this method is practical in reality. Firstly, the core advantage of gradient-guidance approaches like [1][2] is that they do not require gradient backpropagation over the neural network, thus adding only minor computational cost to direct sampling. In contrast, the method proposed in this work requires gradient backpropagation over the neural network, making it prohibitively slow. As mentioned in the paragraph above Algorithm 1, guiding stable diffusion requires 76 minutes for optimization. Secondly, since the method requires gradient backpropagation over the neural network, I think the author should compare it to the serial works on direct optimization in [3][4][5], which are completely ignored in this work. These works target the same task and have the same computational requirements. From my understanding, the algorithm proposed in this work cannot rival direct optimization methods, as the latter do not rely on simplified models to derive the algorithm. In fact, a concurrent work [6] shows that one can optimize SDXL, a 3B parameter model, within 10 minutes, which is in sharp contrast to the performance of Algorithm 1 proposed in this work.
2. The theoretical analysis is certainly novel, but it heavily relies on the linear sample space assumptions. It is unclear whether this method can be applied to cases where the sample space is highly nonlinear. The author conducted a simple experiment with image diffusion, which somewhat investigates this point, but I think the author should also conduct experiments on a highly nonlinear synthetic data distribution to validate if their method works in this scenario.
3. The writing and presentation of the paper can be improved. For example, when moving from Section 3 to Section 4, the target objective function switches from a linear function to a general function without much explanation. I can only guess from Algorithm 1 that the gradient
$g$ is directly replaced by some stochastic gradient estimator of a general function $f$? I think the author should provide some explanation on this point.
[1] Song, Jiaming, et al. "Loss-guided diffusion models for plug-and-play controllable generation." International Conference on Machine Learning. PMLR, 2023.
[2] Chung, Hyungjin, et al. "Diffusion posterior sampling for general noisy inverse problems." arXiv preprint arXiv:2209.14687 (2022).
[3] Bram Wallace, Akash Gokul, Stefano Ermon, and Nikhil Naik. End-to-end diffusion latent optimization improves classifier guidance. In Proceedings of the IEEE/CVF International Conference on
Computer Vision, pages 7280–7290, 2023b.
[4] Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, and Yaron Lipman. D-flow:
Differentiating through flows for controlled generation. arXiv preprint arXiv:2402.14017, 2024.
[5] Korrawe Karunratanakul, Konpat Preechakul, Emre Aksan, Thabo Beeler, Supasorn Suwajanakorn,
and Siyu Tang. Optimizing diffusion noise can serve as universal motion priors. arXiv preprint
arXiv:2312.11994, 2023.
[6] Tang, Zhiwei, et al. "Tuning-Free Alignment of Diffusion Models with Direct Noise Optimization." arXiv preprint arXiv:2405.18881 (2024).
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. I wonder how the algorithm proposed in this work compares to the direct optimization approaches in [3][4][5][6] (references mentioned above)?
2. Could the author conduct some simple experiments on synthetic data with a nonlinear sample space? I am curious how well the proposed method will perform when the linear assumption is violated.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See my comments above. In summary, the main limitations of this work are:
1. Incomplete literature review and comparison with existing works.
2. The performance of the proposed algorithm appears to be impractical.
3. The presentation can be improved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1:** The proposed method is prohibitively slow due to gradient backpropagation over the neural network. Other approaches like [1][2] do not require gradient backpropagation over the neural network.
**A1:** There seems to be some confusion. Your understanding is incorrect. Both [1] and [2] require backpropagation through the score network. Please refer to:
- In [1], Eqn.2 includes score network, and Eqn.8 takes gradient over it.
- In [2], Eqn.10 includes score network, and Eqn.16 takes gradient over it.
Our algorithm is as efficient as other gradient-based guidance methods while enjoying additional theoretical guarantees. Please see our **new Table 1 in [pdf][pdf-link]** for a detailed analysis of our runtime efficiencies.
>**Q2:** The method can't rival direct optimization. These works target the same task and have the same computational requirements.
**A2:** Thank you for pointing us to these papers. They are relevant and we have added discussions about these works in our **expanded literature review** in Sec 4 of [pdf][pdf-link].
- **We target more general tasks and focus on theory.** We respectfully disagree that our method and direct optimization target the same tasks. Indeed they are relevant. However, we consider general optimization over data's latent manifold. Our Section 5 provides the first methodology and theory for adapting diffusion models with self-play and online data collection to **find the global optimum of a user-provided objective function**.
- **They don't have the same computation requirement: Direct optimization has a high memory burden and ours does not.** While our gradient guidance method enables sampling using $O(1)$ memory, direct optimization methods [3-6] suffer from **O(T)** memory, a substantially higher burden, because they need to backpropagate through the ODE solver, which requires storing all intermediate gradients when utilizing the chain rule. For example, [4] states: *"backpropagating through the solver can be expensive in memory"*, similarly, [5] notes: *"maintaining the intermediate activations for solving the ODE during backpropagation can be memory-intensive. This issue can be addressed with gradient checkpointing or an invertible ODE, at the cost of more computation or model complexity."* Thus, our method is much more lightweight.
>**Q3:** A concurrent work DNO [6] shows that one can optimize SDXL, a 3B parameter model, within 10 minutes, which is in sharp contrast to the performance of Algorithm 1 proposed in this work.
**A3:** Thank you for pointing out this paper. After carefully checking it, we found that the precise running time was reported only for SD 1.5 (Table 1 in DNO [6]), the same model tested in our experiments. For SDXL, there is only a figure with a vague runtime axis, which prevents a direct comparison.
We added a **new Table 1 [our rebuttal][re-link] for analysis of computation efficiency**. For adapting the SD 1.5 model, our method takes 15.8 seconds per optimization round and <5 iterations to converge (as shown in Figure 6). **Our total runtime is < 2 min**, which is comparable to the 10 min runtime reported in Table 1 of DNO.
>**Q4:** Could you conduct some simple experiments on synthetic data with a nonlinear sample space?
**A4:** Thanks for the suggestion. We have conducted an **additional experiment (Sec 1 in [pdf][pdf-link])** on data lying on a nonlinear space. Our new result demonstrates that our guidance better preserves the structure of nonlinear manifolds than naive gradient guidance.
>**Q5:** When moving from Section 3 to Section 4, why does the target objective function switch from a linear function to a general function?
**A5:** Yes our task is to generate solutions for general nonlinear, concave optimization problems. Our Section 3 uses the linear model as a motivating example to derive the gradient guidance. Our Sections 4, 5 provide our main results for general nonlinear optimization.
References:
[1] Song, Jiaming, et al. "Loss-guided diffusion models for plug-and-play controllable generation." International Conference on Machine Learning. PMLR, 2023.
[2] Chung, Hyungjin, et al. "Diffusion posterior sampling for general noisy inverse problems." 2022.
[3] End-to-end diffusion latent optimization improves classifier guidance. ICCV, 2023.
[4] D-flow: Differentiating through flows for controlled generation, 2024.
[5] Optimizing diffusion noise can serve as universal motion priors, 2023.
[6] Tuning-Free Alignment of Diffusion Models with Direct Noise Optimization. arXiv preprint, 2024.
[re-link]: https://openreview.net/forum?id=X1QeUYBXke&noteId=WsELdf2puF
[pdf-link]: https://openreview.net/attachment?id=WsELdf2puF&name=pdf
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal
Comment: I would like to thank the author for addressing some of my concerns and correcting my mistake. Thus I decided to increase my score by 1. However, I still do not lean towards acceptance for two reasons.
1. The design of the algorithm and theory, which are claimed to be the main contribution of this work, relies on a simplified model (linear or concave). This makes it unclear what the fundamental factor driving the improvement over other baselines like LGD is, and also dims the meaningfulness of the theoretical results. Besides, I think several aspects of the algorithm design also appear in prior works on gradient guidance.
2. I disagree with the author's comment "We respectfully disagree that our method and direct optimization target the same tasks. Indeed they are relevant. However, we consider general optimization over data's latent manifold. Our Section 5 provides the first methodology and theory for adapting diffusion models with self-play and online data collection to find the global optimum of a user-provided objective function." The direct optimization method also targets the scenario of "adapting diffusion models with self-play and online data collection to find the global optimum of a user-provided objective function", and hence this work is not "the first methodology". As far as I can tell, all the experiments in the current manuscript can also be tackled by the direct optimization method. While saying so, I do agree that the gradient guidance method has lower time complexity. I know it could be hard to run a direct numerical comparison during the short rebuttal period, but I do wish the authors to acknowledge the importance of this comparison, because it is still possible that the direct optimization method, while having high time complexity for each gradient step, can still have a better reward-time trade-off compared to the gradient-guidance method.
---
Rebuttal 2:
Comment: Hello thanks for the response! We are glad that **your primary concern about backpropagation is now resolved**. In regards to further comments:
> "the main contribution of this work relies on a simplified model (linear or concave). "
Sorry there could still be a major misunderstanding. Our contribution does not **rely** on a simplified model.
1. The simplified model is only used to **motivate** the design of gradient guidance. We choose to use a simple linear model to intuitively explain why certain forms of guidance are better than others.
But our results are not limited to the simplified model, as follows.
2. **Theorem 1** holds for **arbitrary** distributions and does **not** require a linear score model.
3. The guidance term constructed in our paper, G_loss (Eqn 7 and Eqn 11 in submission), applies to any pre-trained score network. It is not limited to linear models.
4. Our first experiment (Sec 6.1 of submission) uses a 15M-parameter, nonlinear, U-Net score network.
5. Our second experiment (Sec 6.2) generated human-interpretable images, validating that our method preserves the latent manifold structure underlying image data.
6. The image experiment (Sec 6.2) also validated the method on nonconcave objectives.
7. Per your request for additional nonlinear experiments, we have provided a new set of experiments for synthetic data on nonlinear, spherical manifolds (**Sec 1 of [pdf][pdf-link]**).
While our methodology extends beyond "simplified models", we intentionally begin with simple mathematical models as the foundation for exploring complex theories. Simple models, such as linear models, have been fundamental to the developments of deep learning and diffusion model theories, e.g., [38] Marion et al, 2024. Philosophically, mastering these basic models often provides the deepest insights, most practical utility, as well as generalizability. Therefore, we firmly believe that robust theoretical research must start with simplicity.
> “several aspects of algorithm design also appear in the works on gradient guidance”
Our paper focuses on theory. As we already explained, gradient guidance via backpropagation is a common practice (universal guidance does it; [1] and [2] also do it). Our contribution is to provide theoretical motivation for this design and establish the first optimization convergence theory.
> “Direct optimization method also targets the scenario "adapting diffusion models with self-play and online data collection to find the global optimum of a user-provided objective function.", and hence this work is not "the first methodology".
By “the first methodology”, we mean the first methodology to provably solve optimization problems with convergence/optimality guarantees, with both theory and experiments. We are not aware of any other method that provably adapts a pretrained diffusion model to solve optimization problems to global optimum.
We appreciate that you now agree that guidance methods and direct optimization “have different computation requirements”. They are different approaches in nature, and fully understanding their limits would be future work. While our focus is guidance, we appreciate that you highlight this alternative technological route. We’d be happy to incorporate more discussions about it in our related work section.
**Could you please point us to the specific literature on direct optimization with self-play and online data collection to find global optimum of any objective function?** It’s possible that we have missed something. We are happy to go over them and include them in our literature review. Thank you!
- Please also note that the discussion period will end soon today. So we might not be able to respond again, but we appreciate any constructive comment!
[1] Song, Jiaming, et al. "Loss-guided diffusion models for plug-and-play controllable generation." International Conference on Machine Learning. PMLR, 2023.
[2] Chung, Hyungjin, et al. "Diffusion posterior sampling for general noisy inverse problems." arXiv preprint arXiv:2209.14687 (2022).
[38] Marion et al. Implicit diffusion: Efficient optimization through stochastic sampling, 2024
[pdf-link]: https://openreview.net/attachment?id=WsELdf2puF&name=pdf | Summary: Under the assumption of the data belonging to a low-dimensional linear subspace, the authors investigate two common gradient-based guidance techniques of diffusion models, encouraging the use of one of them (computing the gradient at the estimate of x0 given xt, as done in many works).
For concave reward, and under the additional assumption of linear score function, they also establish convergence results for the mean of the guided process.
Strengths: Overall providing theory for guided diffusion is appreciated.
The paper formally shows an advantage of a certain guidance over the other.
The presentation is clear and the paper is convenient to read.
Weaknesses: The assumptions made in the paper are strong (as discussed below).
A certain weakness of Theorems 2 and 3 is that they consider only convergence of the mean of the procedure and do not say anything about the variance.
There are no new algorithmic ideas in the paper, as the promoted gradient-based guidance is known and common, and there also exist works on fine-tuned/adaptive diffusion models. Thus, the authors need to tone down some claims in the contribution list and in the abstract.
More comments are stated below.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1.
The tight connection between gradient-guided diffusion models and proximal optimization algorithms is discussed in detail in:
Tomer Garber and Tom Tirer, "Image Restoration by Denoising Diffusion Models with Iteratively Preconditioned Guidance," CVPR, 2024.
This relevant literature should be mentioned.
2.
The statement in line 48:
"Why does naively guiding diffusion models using gradient never work in practice?"
is not clear, as there are leading approaches with gradient-based guidance that work, such as (Garber and Tirer, 2024) and the back-projections [53] / least-squares [14] methods that it generalizes.
Only much later in the paper can the reader see that the authors do not refer to **the common practice** of computing the gradient at the estimate of x0 given xt as plain gradient-based guidance. This should be stated clearly at an early stage.
This also means that you need to tone down the claim in the contribution list that you: "introduce guidance based on forward prediction loss", because this is already a well-known practice.
3.
Based on your Eq. 1, it seems that you consider the variance preserving (VP) SDE formulation.
Why do you state in line 85 that without loss of generality all your analysis is done for q=1?
This needs to be explained, at least in the appendix, as q=1 is not the common setting (but rather some linear increase from near 0 to 1).
4.
There seems to be some discrepancy between your Eq. 2, associated with Eq. 1, and the SDE formulation of diffusion models [50]. Recall that dt in Eq. 2 is negative.
5.
The "general" explanation in Section 3.2 that naive gradient-based guidance doesn't work is not convincing because it ignores the fact that the step size decreases as t gets closer to t=0 and that the noise injection can mitigate error propagation.
6.
You state: "Alg. 1 is light-weighted ... takes 76min overall." Recall that the reverse process is not performed in the offline pretraining stage. So obviously Alg. 1 is quite slow. Indeed, it is known that guidance methods that include the DNN's Jacobian computation are slow.
7.
The assumptions of the theoretical analyses in the paper are quite strong.
Especially Assumption 1 on the signal belonging to low dimensional linear subspace and the assumption on linear score model (Eq. 12).
Already in the contribution list you should state the assumption on the score model, which essentially leads to convex optimization.
8.
Any idea how can Theorem 1 be generalized to low dimensional manifold rather than low dimensional linear subspace?
Under the assumptions, in what aspects does Theorem 2 differ from standard convex analysis? (maximization of an L-smooth concave function?).
Aren't there any other works that study convergence of guided diffusion models except Marion et al. [38]? There are several works on convergence of diffusion models, e.g., (De Bortoli, 2022). What prevents extending them to the guided case?
Valentin De Bortoli, "Convergence of denoising diffusion models under the manifold hypothesis," TMLR, 2022.
9.
Under the subspace assumption 1, your data covariance matrix is a DxD matrix of rank d<D.
Therefore it is not invertible. I suggest using the dagger symbol to avoid confusion between inverse and pseudoinverse. Provide more details in the proof to clarify that this issue is taken into account.
Note that there are some peculiar differences between inverse and pseudoinverse, e.g., generally, pinv(AB) is not equal to pinv(B)pinv(A).
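This caveat is easy to verify numerically; the following sketch (our own illustration, not from the paper) uses NumPy's `pinv` on a pair of rank-deficient factors:

```python
import numpy as np

# Illustration of the reviewer's caveat: for the Moore-Penrose
# pseudoinverse, pinv(A @ B) != pinv(B) @ pinv(A) in general,
# unlike the reverse-order law for true inverses.
A = np.array([[1.0, 0.0]])    # 1x2 row vector
B = np.array([[1.0], [1.0]])  # 2x1 column vector

lhs = np.linalg.pinv(A @ B)                   # pinv([[1.]]) = [[1.]]
rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)   # [[0.5, 0.5]] @ [[1.], [0.]] = [[0.5]]

print(lhs.item(), rhs.item())  # 1.0 vs 0.5 -- the reverse-order law fails
```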
10.
Regarding Section 5, you need to clearly state what are the differences of Alg 2 from many other works that fine-tune pretrained diffusion models (just google "adaptive diffusion" and "diffusion personalization").
11.
What is the motivation for convergence to the **global** optimum of the guidance objective if one pays in ignoring/omitting the prior information / regularization of the score model? (As seems to be promoted in Sections 5 and 6).
The tradeoff between reward/loss and regularization is trivial, and clearly in most cases there is some delicate balance between them that a user should look for.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our effort in providing theory for guidance. We first respond to your main concerns.
>**Weakness 1** Assumptions made are strong. Any idea how can Theorem 1 be generalized to low dimensional manifold rather than low dimensional linear subspace?
**A:** Given the challenge in developing theory for guided diffusion, adopting linear assumptions on the data subspace and score function should be considered reasonable; a more detailed discussion can be found in the **Linear Assumption** part of [our rebuttal][re-link].
Theorem 1 may have the potential to generalize to low-dimensional manifolds. For a low-dimensional manifold, the local geometry around any point can be approximated by its linear tangent space, and the current Thm 1 shows that the guidance lies in this tangent space. By controlling the strength of the guidance, the guidance vector on the tangent space will remain close to the manifold.
> **Weakness 1 (Cont.)** In what aspects does Thm 2 differ from standard convex analysis? There are works on convergence of diffusion models, e.g., (De Bortoli, 2022). What prevents extending them to the guided case?
**A:** **Diffusion theory cannot be simply extended to guided diffusion.** The addition of a guidance term disrupts the dynamics of the reverse SDE, thus the output distribution cannot be characterized. Previous theoretical work, such as [17] (De Bortoli, 2022), heavily relies on the fact that distribution errors stem from approximation errors in the score function and the initial distribution of the SDE. However, the training-free approach doesn't provide an approximation of $\nabla_{x_t} \log p_t(y|x_t)$ but instead introduces a plug-in, implementation-convenient gradient guidance term. This discrepancy creates a barrier to directly extending existing theories to guided diffusion. [38] (Marion et al., 2024) is the only work we know of involving the theory of guided diffusion dynamics, and it also requires the linear score assumption. Therefore, this assumption should be considered reasonable. **Our key insight is characterizing the output sample distribution of guided diffusion as taking a proximal gradient step.** We derive the reverse process as a proximal step. This established relationship forms the premise for optimization and distinguishes our analysis from conventional convex optimization.
>**Weakness 2:** A certain weakness of Theorems 2 and 3 is that they do not say anything about the variance.
**A:** This is a misunderstanding: you may have missed the formula we provided for the variance in line 235, which points to Eqn 25 in Appx E. This formula demonstrates that the variance of the output distribution is smaller than that of the training data (empirical covariance). Similarly, for Theorem 3, we present the variance in Eqn 31 of Appx E.3.
>**Weakness 3:** There are no new algorithmic ideas in the paper, as the method is common, and also there exist works on fine-tuned/adaptive diffusion models.
**A:** Algorithmically, we introduce an iterative optimization algorithm that applies the proposed gradient guidance to a local linearization of the objective function (Alg. 1). This approach is new within the optimization context. In addition, our main focus is more on theoretical understanding of guided diffusion than on proposing a new method.
Response to your other questions:
>**Q1:** The statement in line 48 on "naive gradient guidance" is not clear. The explanation in Sec 3.2 that naive gradient doesn't work is not convincing because it ignores that fact that the step-size decreases and that the noise injection can mitigate error propagation.
**A1:** We appreciate the reviewer's suggestion; we added a clear definition of "naive gradient guidance" around line 48. To give a more convincing explanation for the failure of naive gradient-based guidance, we add a new lemma showing that the naive gradient suffers at least a constant error:
**Lemma(Failure of naive guidance)** For naive guidance $\texttt{G}(X_t^{\leftarrow}, t) = b(t) \nabla f(X_t^{\leftarrow})$, suppose $b(t) >b_0>0$ for $t >t_0.$ For data in subspace under Assumption 1 and reward $f(x)=g^\top x$, $g \perp Span(A)$ with $h(t)=1-\exp(-\sqrt{t})$, then the orthogonal component of the generated sample is consistently large:
$$
\mathbb{E} [X_{T,\perp}^{\leftarrow}] = C g, \quad C > \exp{\left(-5/2\right)}b_0.
$$
>**Proof.** Under Assumption 1, the score can be decomposed to terms parallel and orthogonal to Span(A) (Prop 2, Appx D.3) Applying naive guidance, we examine the orthogonal reverse process:
$$
\mathrm{d} X_{t, \perp}^{\leftarrow} =\left[\frac{1}{2}-\frac{1}{h(T-t)}\right] X_{t, \perp}^{\leftarrow}\mathrm{d} t + b(t)g \mathrm{d} t+\left(I_D-A A^\top\right) \mathrm{d} \overline{W}_t.
$$
>Solving this SDE, we get that the expectation of the final state follows $\mathbb{E}[X_{T, \perp}^{\leftarrow}] = \int_0^T \exp \left(- \int_0^{t}h^{-1}(s)\mathrm{d}s \right) e^{t/2}b(T-t) g \mathrm{d}t$. For $h(t) = 1 - \exp(-\sqrt{t})$, the coefficient of the direction $g$ is larger than $\int_0^T \exp(-t/2-2\sqrt{t})b(T-t)\mathrm{d}t > \int_{0}^1 \exp(-5/2)b_0 \mathrm{d}t >0$, where we may assume $T>1$. Thus, $\mathbb{E}[ X_{T, \perp}^{\leftarrow}] \neq 0$. This means the generated sample goes out of the subspace, i.e., naive gradient guidance violates the latent structure.
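As a quick numerical sanity check of the final inequality (our own sketch, not part of the original rebuttal), one can evaluate $\int_0^1 \exp(-t/2 - 2\sqrt{t})\,\mathrm{d}t$ and confirm that it exceeds $\exp(-5/2)$:

```python
import numpy as np

# Trapezoidal evaluation of the lower-bound integral from the lemma's proof:
# since exp(-t/2 - 2*sqrt(t)) >= exp(-5/2) on [0, 1], the integral over
# [0, 1] must exceed exp(-5/2) ~= 0.082.
t = np.linspace(0.0, 1.0, 100_001)
f = np.exp(-t / 2 - 2 * np.sqrt(t))
integral = float(np.sum((f[:-1] + f[1:]) / 2) * (t[1] - t[0]))

print(integral, np.exp(-2.5))  # the integral (~0.25) comfortably exceeds ~0.082
```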
[re-link]: https://openreview.net/forum?id=X1QeUYBXke&noteId=WsELdf2puF
[pdf-link]: https://openreview.net/attachment?id=WsELdf2puF&name=pdf
---
Rebuttal 2:
Comment: >**Q2:** The tight connection between gradient-guided diffusion models and proximal optimization algorithms is discussed in detail in Garber and Tirer (2024). This relevant literature should be mentioned.
**A2:** This work is certainly related and has been included in our revised literature review (Sec 4 of [pdf][pdf-link]). However, the connection between proximal-gradient methods and gradient-based guidance discussed there is different from the connection we make. The connection in their paper is an optimization trick used to propose a two-step alternating algorithm for guided generation, which is classic in the Plug-and-Play (PnP) line of research. In contrast, our connection is an intuitive understanding distilled from our optimization theorem: the pretrained model has a regularization effect, while the guidance serves to optimize the objective.
> **Q3:** Is the algorithm slow?
**A3:** There might be misunderstanding on the reported "76 min" running time. We newly added a computation efficiency analysis with detailed running time records, please refer to **Table 1** and **computational efficiency** section in [our rebuttal][re-link] for more details.
>**Q4:** What's the difference between Alg.2 and other fine-tuning diffusion models works?
**A4:** Alg. 2 simultaneously fine-tunes the pretrained model and adds a guidance term during inference. In contrast, existing works on fine-tuning diffusion models usually involve only fine-tuning the model weights.
>**Q5:** What is the motivation for considering convergence to the global optimum of the objective if one pays in ignoring the prior information of the score model?
**A5:** In our optimization framework, our goal is to find the global optimum **within the subspace**, which is also prior information or the underlying constraint. Thm 3 proves the exact convergence to the global optimum within the subspace. For example, in protein design, the generated samples should closely mimic natural proteins and comply with biological principles. Failure to do so can result in unstable structures, significantly degrading performance metrics. We agree that to which extent we balance reward optimization with regularization depends on the specific use case.
>**Q6:** Why do you state without loss of generality that $q=1$ for the noise schedule in line 85?
**A6:** Our theoretical work actually holds for all general values of $q$, not just $q=1$. We appreciate your observation and we've clarified this point in our revision.
>**Q7:** Why are Eqn 2 and Eqn 6 in [5] Song et al. (2020) inconsistent?
**A7:** In our Eqn 2, $X_0$ represents the standard Gaussian noise distribution, whereas in Song et al.'s Eqn 6, $X_T$ represents the noise distribution. This difference in notation leads to a discrepancy in the $\mathrm{d}t$ term, specifically in its sign.
>**Q8:** Suggests using the dagger symbol to distinguish between inverse and pseudoinverse.
**A8:** Thank you for the suggestion. We will add more clarification in the revision. Regarding the data covariance matrix, it has the form $A\Sigma_u A^T$, where $\Sigma_u$ is full rank and the columns of $A$ are orthonormal, thus the pseudoinverse is $A\Sigma_u^{-1} A^T$.
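A small numerical check of this identity (our own illustration, with arbitrary dimensions), using orthonormal columns obtained from a QR factorization:

```python
import numpy as np

# Check: for A with orthonormal columns (A.T @ A = I) and full-rank Sigma_u,
# the Moore-Penrose pseudoinverse of the rank-d covariance A @ Sigma_u @ A.T
# is A @ inv(Sigma_u) @ A.T, as stated in the rebuttal.
rng = np.random.default_rng(0)
D, d = 6, 3
A, _ = np.linalg.qr(rng.standard_normal((D, d)))  # D x d, orthonormal columns
Sigma_u = np.diag([2.0, 3.0, 5.0])                # full-rank latent covariance

cov = A @ Sigma_u @ A.T                           # D x D matrix of rank d < D
claimed_pinv = A @ np.linalg.inv(Sigma_u) @ A.T

print(np.allclose(np.linalg.pinv(cov), claimed_pinv))  # True
```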
[re-link]: https://openreview.net/forum?id=X1QeUYBXke&noteId=WsELdf2puF
[pdf-link]: https://openreview.net/attachment?id=WsELdf2puF&name=pdf
---
Rebuttal Comment 2.1:
Comment: I thank the authors for their response, which addresses most of my concerns.
As mentioned above, I still believe that the claims on the algorithmic novelty should be toned down (e.g., the core guidance idea is common), and that the strong assumption on the linear score model (Eq. 12) should be stated already in the contribution list.
I decided to increase the rating, provided that the next version will include these changes, as well as a discussion on the connection between gradient-guided diffusion schemes and proximal optimization methods that is stated in existing work (where "pretrained model has an effect of regularization" as well).
---
Reply to Comment 2.1.1:
Title: Author Reply to Reviewer 2yVt
Comment: > I thank the authors for their response, which addresses most of my concerns.
We are glad that our response addressed most of your concerns and that you increased the rating. Thank you again for raising some insightful questions, which helped us improve the paper. That being said, we will include in the revised text version: a discussion on similar guidance designs in existing literature, a mention of the assumptions in the contribution list, as well as a discussion on the previously made connection between gradient-guided diffusion and proximal optimization, and its difference from the connection drawn in our paper.
Strengths: - Strong theories are presented in this paper. Rethinking the guidance formulation as the optimization procedure, that aims to optimize 1) guidance and 2) convergence of the data subspace, is interesting.
- The paper is well-written and easy to follow.
Weaknesses: - While the proposed algorithm demonstrates considerable theoretical value, its practical implementation appears to be quite slow. This could be perceived as a weakness of the paper.
- The paper discusses Universal Guidance, yet it omits mention of similar training-free [B, C, D, E, F, G] and training-required guidance [A] mechanisms for utilizing off-the-shelf models. Including a discussion on these mechanisms would enhance the comprehensiveness of the paper. Additionally, it would be beneficial to address the practical and theoretical advantages compared to these mechanisms. As it stands, the paper does not sufficiently clarify these comparative advantages, which leaves some ambiguity about the relative strengths compared to these methods.
Technical Quality: 3
Clarity: 4
Questions for Authors: - The paper relies heavily on the linear assumption. How reasonable is this assumption in the context of the proposed algorithm? Aren't guidance classifiers and score functions typically more complex than linear functions?
## **Reference**
- [A] Towards Practical Plug-and-Play Diffusion Models, CVPR 2023.
- [B] Elucidating the Design Space of Classifier-Guided Diffusion Generation, ICLR 2024.
- [C] AdjointDPM: Adjoint Sensitivity Method for Gradient Backpropagation of Diffusion Probabilistic Models, ICLR 2024.
- [D] Towards Accurate Guided Diffusion Sampling through Symplectic Adjoint Method, arXiv 2023.
- [E] Loss-guided Diffusion Models for Plug-and-Play Controllable Generation, ICML 2023
- [F] Manifold preserving guided diffusion, Arxiv 2023
- [G] Freedom: Training-free energy guided conditional diffusion model, ICCV 2023.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Linear approximation of the score function in their proof is mentioned as the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging our theoretical contribution and for your other comments! We first respond to your main concerns.
>**Weakness 1:** While the proposed algorithm demonstrates considerable theoretical value, its practical implementation appears to be quite slow.
**A1:** During the rebuttal, we added a new running-time analysis for our experiments; please refer to **Table 1** and the **computational efficiency** section in [our rebuttal][re-link] for more details.
>**Weakness 2:** The paper discusses Universal Guidance, yet it omits mention of other similar training-free and training-required guidance mechanisms. Additionally, it would be beneficial to address the practical and theoretical advantages compared to these mechanisms.
**A2:** We appreciate your mention of these related works that were omitted; we have included them in the revision (Sec 4 in [pdf][pdf-link]) and discussed our practical and theoretical advantages compared to them. Here is a summary of the comparison.
- First of all, to adapt a pre-trained diffusion model for guided generation, there are training-required methods [A] and training-free methods, which can be further classified into gradient guidance methods [E,F,G,B] and latent optimization methods [C,D]. Our method falls into gradient guidance methods.
- Generally, classifier guidance [A] requires training/fine-tuning the classifier on noisy input even when starting from an off-the-shelf classifier, introducing extra training cost compared to training-free methods. Latent optimization methods are also training-free, but they are memory-intensive due to the need to store all intermediate gradients when applying the chain rule.
- Within gradient-based methods, our work focuses on the theoretical aspect, providing understanding of the guidance method in [E,F,G,B]. Additionally, in our work, we are the first to iteratively apply the gradient guidance as a module for solving optimization problems, and provide theoretical guarantees.
Response to your other questions:
>**Question 1:** How reasonable is the assumption? Aren't guidance classifiers and score functions typically more complex than linear functions?
**A1:** Please note that we do not assume the guidance classifier to be linear in our theorems. The form of our guidance is motivated by Gaussian data + linear classifier, but its guarantees do not rely on the classifier being a linear model. Thm 1 also holds for a non-linear score, i.e., an **arbitrary** distribution.
We do assume a linear score network in Thm 2 and 3; though restrictive in practice, it is a reasonable assumption for establishing the first optimization theorem for guidance and yielding mathematical insights. The same assumption is also adopted by [38] (Marion et al., 2024), which is the only work we know of involving the theory of guided diffusion dynamics.
In addition, the effectiveness of our method extends beyond the linear assumption, as justified by our experiments and the new simulations (Sec 1 in [pdf][pdf-link]) added during the rebuttal. For a more detailed discussion, please refer to the **Linear Assumption** section in [our rebuttal][re-link].
[re-link]: https://openreview.net/forum?id=X1QeUYBXke&noteId=WsELdf2puF
[pdf-link]: https://openreview.net/attachment?id=WsELdf2puF&name=pdf
---
Rebuttal Comment 1.1:
Comment: I sincerely appreciate your thorough response. I carefully read all the reviewer's comments and discussions, and my concerns are addressed.
---
Rebuttal 2:
Comment: Dear Reviewer 8pJj,
We sincerely appreciate your insightful review and are glad we have successfully addressed your concerns. If you feel that all your questions have been satisfactorily answered, we would be grateful if you would consider raising the rating.
Thanks again for your time and efforts in this work!
Best regards,
Authors, | Rebuttal 1:
Rebuttal: We appreciate all reviewers for their valuable feedback!
### **New Theory, Experiments, and Running-Time Analysis**
**Theory for the failure of naive gradient:** We construct a rigorous counterexample (Sec 3 in [pdf][pdf-link]) showing that the generated samples will suffer at least a constant error if the gradient of the reward is directly applied as guidance.
**Experiment with data from a nonlinear manifold:** We test our guidance on data from a nonlinear manifold (Sec 1 in [pdf][pdf-link]) and plot the off-support deviation and reward curves. The curves show that our guidance preserves the data manifold better than naive gradient guidance.
**Break-down analysis for running time:** We report and analyze the running time of our algorithms:
| | Total runtime (iterations) | Per iteration | No guidance |
|------------|:-----------:|:-----------:|:-----------:|
| Simulation | **3.8 min** (50 iter), 76 min (1000 iter) | 4.6 s | 2.6 s |
| Image | **1.3 min** (5 iter) | 15.8 s | 4.9 s |
**Table 1: Runtime Efficiency of Algorithm 1.** **Bold** refers to the total time to converge. No guidance refers to the time for one-time inference of the pre-trained model.
### **Expanded literature review**
We appreciate reviewers pointing us to several papers that are not directly under the topic of "diffusion model for optimization" but yet highly related. We include those in our revised literature review section in [pdf][pdf-link]. Gradient-based guidance and direct latent optimization are two main routes for adapting a pre-trained diffusion model to some reward, in a training-free way. Our method falls into the class of gradient-based guidance, so we recap the comparison to other methods in this class here:
Gradient-based guidance methods [A,B,C,D,E] utilize the gradient as guidance. [A,B,D] solve image inverse problems and [C,E] address guided/conditional image generation. [A,C] both propose a similar guidance: taking the gradient on the predicted clean data $x_0$ with respect to $x_t$. Differently, our paper provides the first rigorous theoretical study of this gradient-based guidance from an optimization perspective, and proposes the first iterative algorithm that enjoys a provable convergence guarantee.
### **Clarification on Our Contribution and Scope**
We want to highlight that our main contribution is to **provide the first optimization guarantee for adapting a pre-trained diffusion model to task-specific needs via gradient guidance**. Specifically: 1) We demonstrated the gradient guidance preserves learnt data structure (Thm 1); 2) We established optimization convergence theory for the gradient-guided diffusion model and its iterative fine-tuned variant (Thm2, 3; Fig 1).
### **Discussion on Linear Assumptions**
- **The linear case is a fundamental setting and is highly nontrivial for theory on guided diffusion.** Extending diffusion theory to guided diffusion is highly nontrivial. The guidance term disrupts the reverse SDE dynamics on which previous works in diffusion theory, such as [17] (De Bortoli, 2022), heavily rely, making it impossible to characterize the output distribution without further assumptions. [38] (Marion et al., 2024) is the only work we know of involving the theory of guided diffusion dynamics, and it also requires the linear score assumption. Therefore, this assumption should be considered reasonable.
- **Practical usage and satisfying properties of our guidance and algorithms extend beyond linear assumption.**
- **For the assumption on a linear score network:** When deployed, the guidance *Gloss* is easily constructed for **any** pre-trained diffusion network, not limited to linear networks. Satisfying optimization performance was validated by experiments (Sec 6.1) with a 15M-parameter U-Net score network.
- **For the linear assumption on the data manifold:** Theoretically, *Gloss* always preserves the data subspace for an **arbitrary** latent distribution, not limited to Gaussians (Lem 1). For nonlinear data manifolds, our experiment (Sec 6.2) generated highly human-interpretable images, suggesting that the manifold-preserving property of *Gloss* persists for highly nonlinear manifolds. The new simulation in Sec 1 of [pdf][pdf-link] on a sphere subspace also verified that this property extends beyond the linear assumption.
### **Clarification on Computation Efficiency**
Some reviewers misinterpreted our discussion of computational efficiency, which states that
> a single run of the backward sampling process (Module 1) takes 4.6s, and Alg. 1 takes 76min overall.
Here "76 min" is the total time for running 1000 optimization rounds (one round corresponds to one complete reverse process with the diffusion model), averaging 4.6 seconds per round in our simulations. Most of our simulations converge within 50 rounds, thus taking only 3.8 min to get satisfying results.
For Stable Diffusion v1.5, our approach needs 15.8 seconds per round. Typically, it achieves satisfactory results in fewer than 5 rounds (Figure 6), totaling 1.3 min. Results are summarized in **Table 1**.
**Backpropagating gradients through the denoiser has been adopted by multiple gradient guidance methods, thus it is NOT a deal breaker for efficiency.** Gradient-based guidance methods like [A,B,C] all require similar gradient backpropagation over the score network, so our approach is not more computationally expensive in this regard.
References:
[A] Diffusion posterior sampling for general noisy inverse problems, 2022
[B] Loss-guided diffusion models for plug-and-play controllable generation, 2023
[C] Freedom: Training-free energy-guided conditional diffusion model. ICCV, 2023.
[D] Manifold preserving guided diffusion. arXiv preprint arXiv:2311.16424, 2023
[E] Towards accurate guided diffusion sampling through symplectic adjoint method. arXiv preprint arXiv:2312.12030, 2023b
[pdf-link]: https://openreview.net/attachment?id=WsELdf2puF&name=pdf
Pdf: /pdf/f5169e8f7b353fc9f4d8c549a96ff0b8238b00b4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Invariant subspaces and PCA in nearly matrix multiplication time | Accept (poster) | Summary: The paper analyses complexity of generalized symmetric eigenvalue problem computation.
Strengths: - Solid analysis of the computations involved in generalized eigenvalue problems.
- Excellent survey of related works.
- Relevant applications in ML (PCA).
Weaknesses: - The assumption of H being symmetric needs to be in the Abstract as well, as it is quite significant.
- Missing citation to kernel PCA [1]
- While the analysis is strong, the computational aspect is not as emphasized. For example, Algorithm 1 is not shown in the main body, only in the Appendix, and the paper has no experimental evaluation.
[1] Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller. "Nonlinear component analysis as a kernel eigenvalue problem." Neural computation 10.5 (1998): 1299-1319.
Technical Quality: 3
Clarity: 2
Questions for Authors: My suggestion is to better emphasize the computational aspect and make the paper more suitable for a conference. For example,
- Move the main Algorithms to the main body as it gives a more visual impact, which is of higher importance for a conference paper
- Use tables to compare accuracy and complexity of different algorithms, such as baselines like Lanczos, Randomized SVD, etc.
- It would be helpful if the authors could show numerical results w.r.t. accuracy, complexity, or relevant quantities to support their theoretical analysis.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their useful feedback and for their suggestions to improve the readability of our manuscript. Below we provide replies to the specific questions that were raised:
1. **Question:** The assumption of $H$ being symmetric needs to be in the Abstract as well, as it is quite significant.
**Answer:** We will fix the missing mention in the abstract that $H$ must be Hermitian and $S$ Hermitian and positive-definite. Indeed, these are crucial assumptions that must be emphasized.
2. **Question:** Move the main Algorithms to the main body as it gives a more visual impact, which is of higher importance for a conference paper.
**Answer:** We will move the most relevant algorithms from the appendices into the main body of the manuscript, taking advantage of the additional page of content that is allowed in the final version.
3. **Question:** It would be helpful if the authors could show numerical results w.r.t. accuracy, complexity, or relevant quantities to support their theoretical analysis.
**Answer:** Due to the limited time, we will not be able to incorporate numerical results in our manuscript, but we will provide indicative examples in the final version.
4. **Question:** Missing citation to kernel PCA [1] Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller. "Nonlinear component analysis as a kernel eigenvalue problem." Neural computation 10.5 (1998): 1299-1319.
**Answer:** We are aware of the paper by Schölkopf et al. and cited it twice in our manuscript (Ref. [116]). If other relevant references are missing, we would gladly add them.
We hope that our responses clarify the concerns raised by the reviewer. If (other) major concerns remain, we will address them to the best of our capabilities. Again, we would like to thank the reviewer for their useful recommendations.
---
Rebuttal Comment 1.1:
Comment: As the deadline of the reviewer-author discussion period is approaching, we were wondering if the reviewer had the chance to consider our responses, and if we can provide further clarifications for any remaining concerns.
---
Rebuttal Comment 1.2:
Comment: Thanks for the response, I suggest that the authors include the planned revisions in the final version. I will increase my score. | Summary: This paper considers the following optimization problem: given Hermitian $H$ and Hermitian, positive definite $S$, find matrices $C$ and $\Lambda$ such that $HC=SC\Lambda$ where $C$ is the eigenvectors and $\Lambda$ is the eigenvalues. The important application is when the eigenvectors of the interest form an invariant subspace, which covers applications such as DFT and PCA. So the goal is to compute a rank-$k$ projection onto the top or bottom-$k$ eigenvectors. The main contribution of this paper is a stable algorithm that computes an approximate projector $\tilde \Pi_k$ such that $\|\tilde \Pi_k - \Pi_k\|\leq \epsilon$ where $\Pi_k$ is the projector onto top/bottom-$k$ principal components. The algorithm runs in the current matrix multiplication time up to log factors in $n$, $\kappa(S)$, $1/\epsilon$ and the gap between $\lambda_k, \lambda_{k+1}$. It uses polylog bits of precision in the above parameters. It also provides error bounds for Cholesky beyond $O(n^3)$ classical algorithm. As corollaries, the algorithm leads to improvement for DFT, PCA and Block-PCA.
Strengths: This paper designs a novel algorithm for invariant subspace projection and PCA in the current matrix multiplication time. The algorithm is a good combination of existing techniques (such as approximating the sign function via inverse Newton) and new insights, such as reducing the computation of eigenvalue gaps and midpoints to eigenvalue threshold counting, and solving the counting problem via smoothed analysis.
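The sign-function ingredient mentioned here can be sketched in a few lines; the plain Newton iteration below (not the inverse-free variant analyzed in the paper) is only an illustration of how sign-function approximation yields eigenvalue threshold counting, with a hypothetical diagonal test matrix:

```python
import numpy as np

def matrix_sign(A, iters=60):
    # Newton iteration X <- (X + X^{-1}) / 2, converging to sign(A)
    # (valid when A has no eigenvalues on the imaginary axis)
    X = A.copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

# Counting eigenvalues above a threshold mu via the sign function:
A = np.diag([-2.0, -0.5, 0.3, 1.7])
mu = 0.0  # must not itself be an eigenvalue
S = matrix_sign(A - mu * np.eye(4))
# (I + sign(A - mu I)) / 2 projects onto eigenvalues above mu; its trace counts them
count_above = round(np.trace(np.eye(4) + S) / 2)
print(count_above)  # 2
```

Repeating such counts for varying `mu` (bisection-style) is the intuition behind locating eigenvalue gaps and midpoints.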
The paper is well-written and results are presented clearly in the main body, some intuitions and proof sketches are provided for better understanding of the algorithms and analysis. Overall, I think this is a good paper with solid results.
Weaknesses: The only complaint I have with this paper (which might be a bit unfair) is that, due to the sheer amount of content and its very numerical-linear-algebra nature, this paper might be better suited for conferences such as ISSAC or journals such as the SIAM Journal on Matrix Analysis. Otherwise, I think this paper is definitely strong enough to be published at NeurIPS.
Technical Quality: 3
Clarity: 3
Questions for Authors: For your open question on rectangular FMM: is the current runtime bottleneck due to block square FMM? I didn't dig too deep into the algorithm details, but I would imagine the $n^\omega$ bottleneck is due to certain linear algebraic operations on $n\times n$ matrices, instead of a bunch of small rectangular FMMs?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors have discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their constructive feedback and for taking the time to review our manuscript. The reviewer is absolutely right about the rectangular FMM. Actually, our comment about rectangular FMM only applies to the analysis of the Block-Krylov method in Section 4.2 and in Appendix G, where we used blocked-square FMM to perform Square$\times$Rectangular matrix multiplication. Using Rectangular FMM instead of blocked-square FMM might provide minor (but not substantial) improvements for specific regimes of the input parameters.
If more information is needed, we would be pleased to further expand on the rectangular FMM or other topics.
---
Rebuttal Comment 1.1:
Comment: I thank authors for answering my questions on rectangular FMM. Just a small follow up question: what parameters of rectangular FMM are you looking for? Could you specify in terms of $\omega(a, b, c)$? Thanks!
---
Reply to Comment 1.1.1:
Comment: Good point, the aforementioned parameter would be $\epsilon$, which determines the block-sizes. $\omega(a,b,c)$ would indeed make it more general, or similarly $T_{MatMul}(A,B)$, to cover also the case where $A$ is sparse (though different mat-mul algorithms have different stability properties so it might need some care). It might indeed be an overkill to mention it as an open problem. We will adapt that part to our best capabilities in the final version. Thank you for the discussion! | Summary: The paper tackles the fundamental problem of GEP subspace approximation (with forward error approximation). The problem is very relevant to different areas of Machine learning. This paper improves the time complexity of this approximation from cubic in n to matrix multiplication time, which is a significant improvement.
I have read the first nine pages and Appendix B, and the idea seems reasonable, and the paper is well explained. However, given the length of the paper (which exceeds 50 pages), I have not been able to verify the correctness of the solution. Therefore, I will not comment on the strengths and weaknesses of the paper and give a low confidence score.
In my opinion, a more theoretical venue such as STOC/FOCS would be more appropriate for this paper (both because of the emphasis on theory and because the reviewers have significantly lower paper load). I wish the authors the best.
Strengths: -
Weaknesses: -
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could the authors please elaborate on the difference between their result and the forward error approximation in Theorem 5.2 of [1] (which is the [37] citation of the authors)? The authors of [1] write that their Theorem 5.2 is restrictive as it needs S to be invertible and H to have a simple eigenspectrum (no repeated eigenvalues). It seems that Theorem 1.1 by the authors also requires S to be invertible but can allow a non-simple spectrum.
2. Is a general dependence on the term gap_k necessary for analysis like this?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments and carefully examining our manuscript as well as Appendix B. Our replies to their comments can be found below:
1. **Question:** Could the authors please elaborate on the difference between their result and the forward error approximation in Theorem 5.2 of [1] (which is the [37] citation of the authors)? The authors of [1] write that their Theorem 5.2 is restrictive as it needs $S$ to be invertible and $H$ to have a simple eigenspectrum (no repeated eigenvalues). It seems that Theorem 1.1 by the authors also requires $S$ to be invertible but can allow a non-simple spectrum.
**Answer:** This is indeed a very interesting point that deserves further investigations. We have been in close contact with the authors of [37] the past months. After discussing and comparing our results in detail, we concluded that our works complement each other. The major difference comes from the fact that our Theorem 1.1 is stated specifically for Hermitian-definite pencils: $H$ must be Hermitian, while $S$ must be Hermitian and positive-definite, which is stricter than just invertible. However, we do not need simplicity in the spectrum because of Weyl's inequality (or the slightly more general Kahan's bound): For Hermitian matrices, a backward-error directly translates into a forward-error in the eigenvalues (see also our Lemma B.1 in Appendix B). This property does not hold for general matrices, as the ones that are assumed in Theorem 5.2 of [37]. In that case, one would resort to either the Bauer-Fike theorem, or to the properties of the pseudospectrum, in order to obtain forward-error bounds from the backward-error. If the reviewer is interested, we can also refer to the thesis of the last author of [37] for additional remarks.
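Weyl's perturbation bound invoked in this answer can be checked numerically; a minimal NumPy sketch (illustrative only, with hypothetical random Hermitian matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 5, 1e-6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # Hermitian
E = rng.standard_normal((n, n)); E = (E + E.T) / 2   # Hermitian backward error
E *= delta / np.linalg.norm(E, 2)                    # scale so that ||E||_2 = delta
lam  = np.linalg.eigvalsh(A)        # ascending eigenvalues
lamt = np.linalg.eigvalsh(A + E)
# Weyl: |lambda_i(A + E) - lambda_i(A)| <= ||E||_2 for every i
forward_err = np.max(np.abs(lamt - lam))
```

This is exactly the Hermitian-only translation of backward error into forward eigenvalue error; for non-Hermitian matrices no such bound holds without extra tools (Bauer-Fike, pseudospectra), as the answer above notes.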
2. **Question:** Is a general dependence on the term gap\_k necessary for analysis like this?
**Answer:** We have thoroughly discussed the $gap_k$ dependence in the bounds with other authors and experts in the field. It appears to be unavoidable. Intuitively, if we want to distinguish between two invariant subspaces, the error must be smaller than the gap. Consider the following $2\times 2$ example:
$$A = \begin{bmatrix} 1+\epsilon & 0 \\ 0 & 1 \end{bmatrix}.$$
The gap is equal to $\epsilon$, and the spectral projector onto the top-1 invariant subspace is $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$.
Now consider the following $\epsilon$-perturbation of $A$: $\widetilde A = \begin{bmatrix} 1 & 0 \\ 0 & 1+\epsilon \end{bmatrix}$.
The spectral projector onto the top-1 invariant subspace is completely different from before. In fact, it is orthogonal to the first one.
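This eigenvector sensitivity can be verified numerically; a minimal NumPy sketch of the same $2\times 2$ example (illustrative only):

```python
import numpy as np

eps = 1e-3
A  = np.diag([1.0 + eps, 1.0])
At = np.diag([1.0, 1.0 + eps])   # an eps-sized perturbation of A

def top1_projector(M):
    # spectral projector onto the eigenvector of the largest eigenvalue
    w, V = np.linalg.eigh(M)     # ascending eigenvalues
    v = V[:, -1:]
    return v @ v.T

P, Pt = top1_projector(A), top1_projector(At)
gap_in_projectors = np.linalg.norm(P - Pt)   # Frobenius norm: sqrt(2), i.e. O(1)
```

Although the eigenvalues of the two matrices agree to within `eps`, the two rank-1 projectors are mutually orthogonal, so no forward-error bound on the subspace can be independent of the gap.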
We hope that these responses clarify the reviewer's concerns. Let us know if the answers are not sufficient so we can elaborate further.
---
Rebuttal Comment 1.1:
Title: Further clarifications.
Comment: Dear authors,
Thank you for the explanations. Please find a followup question below.
In the paper, you mentioned that obtaining a forward approximation error is a significantly more challenging task than obtaining a backward approximation error. Yet, here you mention that if the matrices are Hermitian (as in the case of $H$), the backward approximation error can be directly translated into a forward approximation error, which seems to contradict the hardness of obtaining the forward approximation error. It would be great if you could help me understand this point.
Thanks a lot.
---
Reply to Comment 1.1.1:
Comment: This is a subtle point, but in this case the statements that appear to be contradicting refer to different quantities. As it is mentioned in our previous comment and in the paper, it is possible to obtain forward-errors for the **eigenvalues** from the backward-error using known inequalities, even for the non-Hermitian case (see e.g. Ref [24]). Obtaining forward-error bounds for invariant subspaces is not easy.
Consider again the $2\times 2$ example in the previous comment. The eigenvalues of $\widetilde A$ perfectly approximate those of $A$. But the invariant subspaces are completely different. Numerical analysis often refers to this fact as "eigenvector sensitivity" (see e.g. ref. [38]). The eigenvalues and the invariant subspaces of matrix pencils are even more sensitive than those of simple matrices, yet, our analysis still applies for Hermitian-definite pencils (Appendix C serves as the backbone).
We hope that this clarifies the question and we are pleased to elaborate more if required. If the reviewer thinks that these details are not clear in the article, we will modify it accordingly. In that case, it would be helpful to point us to specific pages/lines that need clarification. | Summary: The paper presents a novel approach to approximating invariant subspaces of generalized eigenvalue problems (GEPs), which are fundamental in many applications such as Principal Component Analysis (PCA) and Density Functional Theory (DFT). The authors introduce an algorithm that computes a spectral projector $ \Pi_k $ with a forward-error approximation guarantee in near matrix multiplication time $ O(n^{\omega + \eta}) $, where $ \omega $ is the matrix multiplication exponent. The approach advances a new analysis for Cholesky factorization and a smoothed analysis for computing spectral gaps, which are key innovations applied to obtain the desired bound with high probability.
The paper's technical claims are well-supported by rigorous mathematical proofs and thorough analysis. The use of Cholesky factorization and smoothed analysis for spectral gaps is innovative and effectively addresses the computational challenges. The proposed algorithm’s performance is theoretically grounded. However, some sections could benefit from additional explanations, empirical computations and examples to enhance understanding, especially for readers less familiar with the theoretical work developed in the paper, and potentially reaching a larger audience.
Strengths: - **Originality:** The paper introduces a novel approach to solving GEPs and PCA with nearly matrix multiplication time complexity, which is a significant advancement over classical methods. The novel analysis for Cholesky factorization and a smoothed analysis for computing spectral gaps could have a wider application too.
- **Quality:** The mathematical rigor and comprehensive analysis in terms of bit complexity ensure that the proposed methods are both theoretically sound and practically applicable.
- **Clarity:** The paper is generally well-written, with detailed explanations of the methods and thorough proofs.
- **Significance:** The results have broad implications for many applications in machine learning and scientific computing, providing a more efficient computational framework.
Weaknesses: - **Complexity of Implementation:** The proposed methods, while theoretically sound, may be complex to implement in practice. Detailed implementation guidelines for Algorithms 1, 2, 3, and 4 would help. A reference implementation would also be of value, showing how some of the factors hidden in the $O(\cdot)$ complexity could play a significant role in practice. For example, as it stands, it remains open how significant the factor defining the bound on the approximation error $\epsilon$ (which appears inside a $\log\log$ expression) is when working with varied matrix sizes $n$ and approximation errors $\epsilon$. Theorem 1 should also include some comments about this issue.
- **Hyperparameter Sensitivity:** The algorithm's performance depends on several parameters (e.g., spectral gap, condition number). A more detailed discussion on selecting these parameters and their impact on performance would be helpful, along with empirical implementations and discussion.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1) Parameter Selection: How should the hyperparameters (e.g., spectral gap, condition number) be chosen in practice? Are there heuristic methods or guidelines for selecting these parameters to ensure optimal performance?
2) Extensions to Sparse Matrices: How does the proposed method handle sparse matrices common in large-scale applications? Are there any modifications or optimizations specific to sparse matrix structures?
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The presented paper discusses in great depth the theoretical underpinning of the proposed algorithm, and we understand this already has a significant impact. Nevertheless, the lack of practical implementation, empirical validation, and analysis of some aspects of the algorithms hinders the potentially higher impact of the proposed method. Even a limited empirical section, including investigations into the role of the hyperparameters and covering varied structures of matrices, would highly improve the paper's potential impact and readership.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their positive feedback and for carefully reviewing our manuscript! Below we provide answers to the specific questions that were asked:
1. **Question:** Parameter Selection: How should the hyperparameters (e.g., spectral gap, condition number) be chosen in practice? Are there heuristic methods or guidelines for selecting these parameters to ensure optimal performance?
**Answer:** We would like to stress that all required parameters (gaps, condition numbers, norms, etc.) are determined by the algorithm, i.e., the user does not need to provide them. The **only** inputs are the matrices to be handled, their size $n$, and the desired accuracy $\epsilon\in(0,1)$. A reduced set of input parameters is an additional contribution compared to previous algorithms, where methods to determine the required parameters are typically not provided. In our case, the complexity of determining the parameters is included in the main theorems.
2. **Question:** Extensions to Sparse Matrices: How does the proposed method handle sparse matrices common in large-scale applications? Are there any modifications or optimizations specific to sparse matrix structures?
**Answer:** Extending these methods to sparse matrices is a very intriguing, but rather difficult, open problem. To date, for large-scale sparse problems, a Krylov-based method is probably the algorithm of choice. It takes advantage of matrix-vector products and exhibits a time complexity that scales with the sparsity. We thoroughly described the bit-complexity analysis of the celebrated Block-Krylov PCA method (see Section 4 and Ref. [100]). The problem with such methods is that $k$ (the target rank) must be very small for them to work well. As mentioned in Section 5 (iii), large-scale banded problems can probably be solved with so-called ``superfast eigensolvers'' (e.g., Refs. [61, 131]), but we are not aware of any end-to-end complexity/stability analysis of these algorithms. For general (non-banded) sparse matrices and large $k$, we believe that $O(n^\omega)$ is probably the best that can be achieved.
- **Question:** Complexity of Implementation.
**Answer:** We expect that the implementation of the proposed methods does not involve substantial difficulties (although it remains to be demonstrated). The main steps are the diagonal perturbations, the Newton iteration, and the bisection-like algorithm to compute the spectral gap. The level of difficulty should definitely not exceed that of, for example, Ref. [11,37] (for which open source implementations exist).
To conclude, we would like to thank again the reviewer for the detailed feedback and for their interest, they are both highly appreciated! Let us know if further clarifications of our replies are required. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper studies the computational cost of invariant subspaces for generalized eigenvalue problems (GEPs) which is a fundamental computational problem with applications in machine learning and scientific computing. The authors propose a novel method that approximates the spectral projectors of GEPs, and give the first $\tilde{O}(n^{\omega})$ bit complexity result for forward-error approximation in the floating point model, improve upon the previous $\tilde{O}(n^3)$ result. Based on this result, the authors also give new matrix multiplication-type upper bounds for PCA problems in terms of bit complexity.
Strengths: 1. This paper is well-written with very solid theoretical results and proofs. The problem, the model, the results are stated clearly. The existing works and previous results are discussed thoroughly.
2. The idea of reducing the problem to approximating the average and the gap among two adjacent eigenvalues is interesting. The result of approximating these two quantities (Theorem 3.1) can have potential other applications.
3. This paper gives a new stability analysis of the Cholesky factorization under floating point model and improves upon the previous $\tilde{O}(n^3)$ result, which is of independent interest (especially in the TCS community).
Weaknesses: The main concern is that this paper may be too dense with technical details and rigorous proofs as a NeurIPS submission (instead of a TCS conference). It might be better if the authors consider presenting this paper in a way that is easier to follow (e.g., delay more proof details to appendix, while adding some examples of what the results look like for specific schemes of spectral decay, and more detailed discussion about the potential applications / examples of GEPs), so that this problem can be brought to more people in this community.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. In the statement of Theorem 1.1 and Proposition 2.1, is $||H|| \leq 1$ a proper assumption?
2. Also in Theorem 1.1, what if $gap_k$ is very small for the given $k$ (say, $gap_k = 2^{-n}$)?
2. How does the result of Theorem 3.1 compare with previous works (if any)? How hard is it to approximate $\mu_k$ and $gap_k$ for a given $k$, is there any previous work that also achieve the $\tilde{O}(n^{\omega})$ bit complexity result?
3. Can the authors provide a more detailed discussion of how the proposed algorithm can (or cannot) be applied to real world problems such as solving large scale PCA tasks?
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have adequately discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their positive feedback. Below we elaborate on the specific questions that were raised:
1. **Question:**
In the statement of Theorem 1.1 and Proposition 2.1, is $||H||\leq 1$ a proper assumption?
**Answer:**
$||\cdot||\leq 1$ is typically a standard assumption in eigenvalue algorithms. For Theorem 1.1., in particular, $||H||$ can be approximated up to a constant factor in floating point using Lanczos, e.g., by applying Theorem 18 in the ArXiv version of Ref. [101]. We can then scale $H$ by the approximated norm. Similarly with $S^{-1}$, we can use the $\mathsf{SIGMAK}$ algorithm in Appendix E to compute the minimum singular value of $S$, and scale accordingly.
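As a simpler stand-in for the Lanczos-based norm estimate mentioned in this answer (plain power iteration, not the cited Theorem 18 procedure; illustrative only, with a hypothetical test matrix):

```python
import numpy as np

def spectral_norm_estimate(H, iters=200, seed=0):
    # Power iteration on Hermitian H: the Rayleigh quotient converges
    # (for a generic start vector) to the eigenvalue of largest magnitude,
    # whose absolute value equals ||H||_2.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(H.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = H @ v
        v = w / np.linalg.norm(w)
    return abs(v @ H @ v)

H = np.diag([3.0, -1.0, 2.0])
est = spectral_norm_estimate(H)   # ~3.0
H_scaled = H / est                # scaled so that ||H_scaled|| ~ 1
```

A constant-factor estimate of this kind is all that is needed before scaling, as described above.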
2. **Question:**
Also in Theorem 1.1, what if
$gap_k$ is very small for the given $k$ (say, $gap_k=2^{-n}$)?
**Answer:** This is an excellent question that we discussed with other authors and experts in the field. A dependence on the gap should be unavoidable. Otherwise, it is not possible to distinguish between the two subspaces and obtain forward-error bounds (backward-error bounds are easier). If $gap=2^{-n}$, then the problem is rather ill-posed to begin with, but the algorithm will still terminate in polynomial time.
3. **Question:** How does the result of Theorem 3.1 compare with previous works (if any)? How hard is it to approximate $\mu_k$ and $gap_k$ for a given $k$, is there any previous work that also achieve the $\tilde O(n^\omega)$ bit complexity result?
**Answer:** For general $k$ we are not aware of any other $\widetilde O(n^\omega)$ algorithms, except for Banks et al. (Ref. [11]). We briefly described in Proposition B.2 how to use [11] to compute the gap in $\widetilde O(n^\omega)$. However, [11] leads to a slightly slower algorithm than our main $\mathsf{GAP}$ algorithm, which is diagonalization-free. We are also not aware of any lower bounds to compute a single spectral gap. We believe that it should be $\Omega(n^\omega)$-hard, but we have no formal proof for that.
4. **Question:** Can the authors provide a more detailed discussion of how the proposed algorithm can (or cannot) be applied to real world problems such as solving large scale PCA tasks?
**Answer:** When it comes to practical applications, the main benefit of the proposed ``matrix multiplication reduction'' is not the asymptotic complexity of the resulting algorithm, but rather the fact that we can exploit matrix multiplication in the computations. It is fully parallelizable and lends itself to cache-optimization, which can provide significant speedups (up to orders of magnitude) in practice. We also think that random perturbations (diagonal, Ginibre, ...) might actually become very useful in the near future, due to their eigenvalue repulsion properties, but this field is currently under-explored.
Thank you again for the feedback and for the interest! We are happy to discuss these points more in depth, if needed.
---
Rebuttal Comment 1.1:
Comment: I appreciate it that the authors have addressed all my concerns and questions. I have also read all other reviews as well as the corresponding comments by the authors. I think we all agree that this paper provides a strong result with a solid proof, while for a conference like NeurIPS, I would suggest the authors to find a way to present this paper so that it could be understood more easily (relatively speaking). Thanks, I would like to keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you once more for the feedback and for the discussion, we will try to accommodate all the suggestions from all the reviewers accordingly in the final version. | null | null | null | null | null | null |
Generalized Protein Pocket Generation with Prior-Informed Flow Matching | Accept (spotlight) | Summary: In this paper, the authors proposed PocketFlow, a protein-ligand interaction prior-informed flow matching model for protein pocket generation. The flow matching for backbone frames, sidechain torsion angles, and residue/interaction types are appropriately defined. To enhance the structural validity and binding affinity of the generated pockets, the authors proposed to leverage affinity and geometry guidance in the sampling process. Extensive experiments show that PocketFlow provides a generalizable model for pocket generation covering various ligand modalities such as small molecules, nucleic acids, and peptides.
Strengths: 1. PocketFlow leverages domain knowledge including protein-ligand interactions and geometric constraints to enhance pocket generation quality, which is quite novel.
2. PocketFlow extends the pocket generation task to broad ligand modalities such as small molecules, RNA, and peptides. Experiments show that the prior guidance can effectively improve the generalization capability.
3. In experiments, the advantage of PocketFlow over state-of-the-art baselines such as RFDiffuisionAA is obvious, achieving an average improvement of 1.29 in Vina score and 0.05 in scRMSD in Table 1.
4. The description of the PocketFlow algorithm is quite clear, and a preliminary version of code is provided for reproducibility.
5. The authors conducted comprehensive ablation studies to show the contribution of affinity/geometry guidance and protein-ligand interaction learning.
Weaknesses: 1. The backbone model of PocketFlow is modified from existing work such as FrameDiff and is less novel.
2. In lines 252-253, the authors said PocketFlow is only pretrained on protein-small molecule datasets, i.e., CrossDocked and Binding MOAD. Is it possible to train PocketFlow on the combination of protein-small molecule/peptide/RNA datasets for better performance?
3. In experiments on protein-peptide and RNA, PocketFlow represents peptide/RNA ligands as molecules. Could the frame representation for the protein residues also applied to peptide/RNA ligands?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the Weaknesses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The limitations and broader impacts are well discussed in Section 5.5 of the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments and appreciation!
**Comment 1**: The backbone model of PocketFlow is modified from existing work such as FrameDiff and is less novel.
**Response 1**: Thanks for the question! This paper is an application-driven paper and would be of great interest to the growing AI for Science community. To properly model the protein-ligand complex, we appropriately define **multi-modal flow matching** processes for the different components, including SE(3) flow matching for the protein backbone, torsional flow matching for sidechain torsion angles, and categorical flow matching for residue types and interaction types. The incorporation of additional domain constraints into the framework is also not straightforward. For example, we **formulate the complicated geometrical constraints (see Appendix B) as guidance terms** for flow matching, which is novel. To **tackle the non-differentiability of residue type sampling**, we propose the Sidechain Ensemble technique for the interaction geometry calculation. We believe the above-mentioned techniques and practices will also inspire the machine learning research community.
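For background, the generic (single-modality, Euclidean) conditional flow matching construction that such models build on can be sketched as follows; this is a textbook linear-interpolant sketch with hypothetical names and data, not PocketFlow's actual multi-modal parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_training_target(x1, rng):
    """Linear-interpolant conditional flow matching:
    sample x0 ~ N(0, I) and t ~ U(0, 1), form x_t = (1 - t) x0 + t x1;
    the regression target for the velocity field at (x_t, t) is x1 - x0."""
    x0 = rng.standard_normal(x1.shape)
    t = rng.uniform()
    xt = (1.0 - t) * x0 + t * x1
    return t, xt, x1 - x0

x1 = np.array([1.0, 2.0, 3.0])        # a "data" point, e.g. pocket coordinates
t, xt, u = cfm_training_target(x1, rng)
# integrating the (here, exactly known) velocity from t to 1 recovers x1
```

A network trained to regress `u` from `(xt, t)` then defines an ODE whose integration transports noise to data; the SE(3), torsional, and categorical variants replace this Euclidean interpolant with manifold- and simplex-appropriate ones.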
**Comment 2**: In lines 252-253, the authors said PocketFlow is only pretrained on protein-small molecule datasets, i.e., CrossDocked and Binding MOAD. Is it possible to train PocketFlow on the combination of protein-small molecule/peptide/RNA datasets for better performance?
**Response 2**: Thanks for the detailed question! In Table 2, we explore whether the pretrained PocketFlow on the combination of CrossDocked and Binding MOAD can generalize to peptide and RNA-binding pocket design. In the rebuttal period, we further finetuned the pretrained model on protein-peptide/RNA datasets constructed following previous works [1-2]. The datasets are split based on peptide/RNA sequence similarity. In the following table, we report the latest results and observe that finetuning PocketFlow achieves better results across most of the metrics compared to the original ones in the submission. We will include the new experiments and discussions in our revised paper.
| Model | PPDBench | | | PDBBind RNA | | |
|---------------|-----------------------|-------------------|------------------|----------------------|-------------------|------------------|
| | AAR (↑) | scRMSD (↓) | ΔΔG (↓) | AAR (↑) | scRMSD (↓) | ΔΔG (↓) |
| Test set | - | 0.64 | - | - | 0.59 | - |
| dyMEAN | 26.29±1.05% | 0.71±0.05 | -0.23±0.04 | 25.90±1.22% | 0.71±0.04 | -0.18±0.03 |
| FAIR | 32.53±0.89% | 0.86±0.04 | 0.05±0.07 | 24.90±0.92% | 0.80±0.05 | 0.13±0.05 |
| RFDiffusionAA | *46.85±1.45%* | **0.65±0.06** | *-0.62±0.05* | *44.69±1.90%* | **0.65±0.03** | *-0.45±0.07* |
| PocketFlow | **48.54±1.37%** | *0.67±0.03* | **-1.15±0.07** | **46.31±1.22%** | *0.68±0.02* | **-0.90±0.04** |
- **Bold**: Best results
- *Italic*: Second best
[1] Li J, Cheng C, Wu Z, et al. Full-Atom Peptide Design based on Multi-modal Flow Matching[J]. ICML, 2024.
[2] Nori D, Jin W. RNAFlow: RNA Structure & Sequence Design via Inverse Folding-Based Flow Matching[J]. ICML, 2024.
**Comment 3**: In experiments on protein-peptide and RNA, PocketFlow represents peptide/RNA ligands as molecules. Could the frame representation for the protein residues also applied to peptide/RNA ligands?
**Response 3**: Thanks for the insightful question! Yes, the peptide and RNA structures can also be modeled as frames; there are some recent reference papers, such as [1-3]. In PocketFlow, we represent peptide/RNA ligands as molecules for simplicity and to explore the generalization capability of PocketFlow on other ligand domains. We will include the discussion of representations in our revised paper.
[3] Anand R, Joshi C K, Morehead A, et al. RNA-FrameFlow for de novo 3D RNA Backbone Design[C]//ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling.
---
Rebuttal 2:
Title: Response to Author
Comment: Hi,
Thanks for the response, my concerns have been successfully addressed!
Regards, | Summary: This paper studies the task of generalized ligand-binding protein pocket generation. To tackle the challenges of existing works, the authors proposed PocketFlow, a generative model that incorporates protein-ligand interaction priors based on flow matching. PocketFlow explicitly learns the protein-ligand interactions during training and leverages multi-granularity guidance to generate high-quality pockets during sampling. Experiments show that PocketFlow is a generalized generative model across various ligand modalities, including small molecules, peptides, and RNA.
Strengths: 1. The paper is well-written and easy to follow. The illustration of Figure 1 is attractive. The technical details and the code are provided for better reproducibility.
2. PocketFlow represents the first few works to broaden the scope of pocket generation tasks to include various ligand modalities, such as small molecules, nucleic acids, and peptides.
3. PocketFlow effectively combines the latest flow-matching models with prior knowledge (affinity guidance and interaction geometry guidance) to generate protein pockets with enhanced affinity and validity.
4. The proposed algorithms overall sound valid to me. The tasks and evaluation metrics of peptide/RNA-binding pocket design are also well formulated.
5. Experiments show that PocketFlow achieves an average improvement of 1.29 in Vina score and 0.05 in scRMSD. The authors also leverage PoseCheck to evaluate the protein-ligand interactions and steric clashes, which makes the evaluation comprehensive.
Weaknesses: 1. What is the time cost of PocketFlow and baseline methods in generating protein pockets? Would the prior knowledge-based guidance bring an extra burden to pocket generation?
2. In line 222, the authors mentioned sidechain ensemble for the interaction geometry calculation, which is too concise. The authors are recommended to elaborate more on this in the main pages.
3. In Table 2, some metrics of PocketFlow are not the best and RFDiffusionAA has better results.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why didn’t the authors apply discrete flow matching methods for residue/interaction type prediction, e.g., [1]?
[1] Dirichlet flow matching with applications to dna sequence design, arXiv preprint arXiv:2402.05841, 2024
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations and broader impacts in Section 5.5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and appreciation!
**Comment 1**: What is the time cost of PocketFlow and baseline methods in generating protein pockets? Would the prior knowledge-based guidance bring an extra burden to pocket generation?
**Response 1**: Thanks for the detailed question! In Figure 7 of the submitted paper, we compare the average generation time of different models. First, we observed that PocketFlow is much more efficient than template-matching methods (DEPACT) and diffusion-based models (RFDiffusionAA). Its generation time is only larger than that of dyMEAN and FAIR, which are based on equivariant translation. Considering the performance improvement brought by PocketFlow (1.29 in Vina Score and 0.05 in scRMSD), the additional overhead is acceptable. We also compared PocketFlow with its variants without guidance and observed that the guidance mechanisms are quite efficient and do not introduce much overhead. We will add clearer discussions in our revised paper.
**Comment 2**: In line 222, the authors mentioned sidechain ensemble for the interaction geometry calculation, which is too concise. The authors are recommended to elaborate more on this in the main pages.
**Response 2**: Thanks for the constructive suggestion! Due to the page limits, we include the detailed elaborations of sidechain ensemble technique in the appendix. PocketFlow takes the co-design scheme, where the residue type/side chain structure of the pocket is not determined during sampling. Directly sampling from the residue type distribution makes the model not differentiable. We propose to use the sidechain ensemble for the interaction geometry calculation, i.e., the weighted sum of geometric guidance with respect to residue types. We will put the detailed descriptions of sidechain ensemble in our final version.
**Comment 3**: In Table 2, some metrics of PocketFlow are not the best and RFDiffusionAA has better results.
**Response 3**: Thanks for the detailed comments! We admit that RFDiffusionAA is indeed a strong baseline. In Table 2, we evaluate different methods on peptide/RNA-conditioned protein pocket generation to explore generalization capability. The performance of PocketFlow is comparable to that of the state-of-the-art model RFDiffusionAA. Even where some results are not the best, PocketFlow achieves the second best. Such results are quite promising and show the potential of our multi-modality flow matching architecture and domain knowledge-based guidance.
During the rebuttal, we also tried enlarging the pretraining dataset (response 2 to reviewer bfsA) and leveraging a SOTA discrete flow matching method (response 4) to improve the performance, achieving promising results.
In the future, we will keep updating our model and improve its performance.
**Comment 4**: Why didn’t the authors apply discrete flow matching methods for residue/interaction type prediction, e.g., [1]?
[1] Dirichlet flow matching with applications to DNA sequence design, arXiv preprint arXiv:2402.05841, 2024
**Response 4**: Thanks for mentioning the latest discrete flow matching methods, which inspires us a lot. Such discrete flow matching models are perpendicular to our work and can be seamlessly integrated into PocketFlow. During rebuttal, we performed additional experiments and observed promising results of incorporating the discrete flow matching model. We will include the new results with discrete flow matching model in our revised version.
| Model | CrossDocked | | | Binding MOAD | | |
|---------------|-----------------------|-------------------|------------------|----------------------|-------------------|------------------|
| | AAR (↑) | scRMSD (↓) | Vina (↓) | AAR (↑) | scRMSD (↓) | Vina (↓) |
| Test set | - | 0.65 | -7.016 | - | 0.67 | -8.076 |
| DEPACT | 31.52±3.26% | 0.73±0.06 | -6.632±0.18 | 35.30±2.19% | 0.77±0.08 | -7.571±0.15 |
| dyMEAN | 38.71±2.16% | 0.79±0.09 | -6.855±0.06 | 41.22±1.40% | 0.80±0.12 | -7.675±0.09 |
| FAIR | 40.16±1.17% | 0.75±0.03 | -7.015±0.12 | 43.68±0.92% | 0.72±0.04 | -7.930±0.15 |
| RFDiffusionAA | 50.85±1.85% | 0.68±0.07 | -7.012±0.09 | 49.09±2.49% | 0.70±0.04 | -8.020±0.11 |
| PocketFlow | *52.19±1.34%* | *0.67±0.04* | *-8.236±0.16* | **54.30±1.70%** | *0.68±0.03* | **-9.370±0.24** |
| w/ discrete flow | **53.87±1.20%** | **0.66±0.05** | **-8.310±0.17** | *53.39±1.65%* | **0.65±0.03** | *-9.267±0.31* |
- **Bold**: Best results
- *Italic*: Second best
---
Rebuttal Comment 1.1:
Title: Response to author
Comment: Thanks for the authors' response. After reading the rebuttal, my concerns are resolved and I have decided to raise my score. | Summary: The paper explores methods for generating protein pockets given a ligand using a flow matching generative approach. Unlike previous methods, the proposed approach integrates additional constraints into the flow matching learning process to guide the search for relevant pockets. Two types of constraints are considered:
1. Affinity score prediction, which utilizes an oracle predictive model to guide the generation process.
2. Geometric constraints, which ensure that the distances between atoms involved in specific types of bonds remain below certain thresholds.
The authors examine the effectiveness of these constraints, incorporating both domain knowledge and transferred knowledge from an affinity prediction model, in the generation of protein pockets. They benchmark their approach against state-of-the-art methods that do not use such constraints. The results demonstrate that the new pockets generated have improved RMSD and Vina scores.
Strengths: The concept of incorporating additional constraints into Flow Matching is interesting. While conditional flow matching exists, determining the appropriate constraints for pocket generation requires substantial domain knowledge and engineering effort to be effective.
The results are promising, as the authors conducted numerous experiments and comparisons with state-of-the-art methods. It is evident that considerable time was invested in gathering results from these methods.
The paper is well-written and enjoyable to read, even for those with limited knowledge of pocket generation problems.
Weaknesses: I have a concern about the inclusion of the affinity oracle predictor, which was trained on separate datasets. The authors did not specify which dataset was used to train their predictor or address the potential for information leakage between the training dataset and the test set used for pocket finding. Since the predictor provides the flow matching with a prior on which pocket might be relevant for a given ligand, any leakage would give a clear advantage. I looked for more detailed information about the affinity predictor in the appendix but found none regarding the training dataset.
I also question the novelty of the work in terms of its technical contribution to the machine learning community. While conditional flow matching itself is not new, the novel aspect of this work is the incorporation of additional domain constraints into the framework. Therefore, it is unlikely to be of significant interest to the machine learning research community from a methodological development perspective.
Additionally, since the authors retrained the baseline models with additional data, it is important that they report how hyperparameters were selected and tuned for these baselines.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you please address the potential leakage between the dataset used for training the affinity predictor and the datasets used for assessing the pocket generation?
2. Could you please clarify the impacts of HPO of the baselines and the proposed method in this work?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed relevant limitations of the work in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments! Our replies are listed below:
**Comment 1**: The authors did not specify which dataset was used to train their predictor or address the potential for information leakage between the training dataset and the test set used for pocket finding.
**Response 1**: In the original submission lines 739-744, we described the dataset used to train the predictors: To train the binding affinity predictor, we first annotate the data points in the corresponding training set: data points are annotated 1 if their affinity is higher than the average score of the dataset, otherwise 0.
**Therefore, no additional datasets are used and there are no data leakage risks**. In the revised paper, we will highlight the training dataset for the predictors and address the potential concerns.
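As an illustration of the annotation rule described above (the Vina scores here are hypothetical values, not the authors' actual data, and follow the convention that a lower Vina score means higher affinity):

```python
import numpy as np

# Hypothetical Vina scores (kcal/mol) for a training set; lower = stronger binding.
train_vina = np.array([-8.2, -6.5, -7.4, -9.1, -5.9, -7.8])

# Label 1 = higher-than-average affinity (Vina score below the training-set mean),
# label 0 otherwise.
threshold = train_vina.mean()
labels = (train_vina < threshold).astype(int)

print(labels.tolist())  # [1, 0, 0, 1, 0, 1]
```

Since the threshold is computed on the training split only, the labeling introduces no information from the test set.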
**Comment 2**: I also question the novelty of the work in terms of its technical contribution to the machine learning community. While conditional flow matching itself is not new, the novel aspect of this work is the incorporation of additional domain constraints into the framework. Therefore, it is unlikely to be of significant interest to the machine learning research community from a methodological development perspective.
**Response 2**: Thanks for the comment! This paper is an application-driven paper and will first be of great interest to the growing AI for Science community. To properly model the protein-ligand complex, we define a **multi-modal flow matching process for the different components**, including SE(3) flow matching for the protein backbone, torsional flow matching for sidechain torsion angles, and categorical flow matching for residue types and interaction types. The incorporation of additional domain constraints into the framework is also not straightforward. For example, we novelly **formulate the complicated geometrical constraints (see Appendix B) into guidance terms** for flow matching. To **tackle the non-differentiability of residue type sampling**, we propose the sidechain ensemble technique for the interaction geometry calculation. **We believe the above-mentioned techniques and practices will also inspire the machine learning research community.**
**Comment 3**: since the authors retrained the baseline models with additional data, it is important that they report how hyperparameters were selected and tuned for these baselines.
**Response 3**: In Appendix G (Baseline Implementation), we described the details of running the baseline methods. DEPACT is a template-matching method; **we used the recommended hyperparameters, such as the weights of the scoring functions, from the original paper.** RFDiffusionAA is the state-of-the-art diffusion model for generalized biomolecular modeling and generation; **we use the provided checkpoints and the recommended hyperparameter settings from the paper** because the training code and data are not available. dyMEAN and FAIR are end-to-end deep generative models for protein sequence-structure co-design; **we performed a grid search over the key hyperparameters, such as the number of layers and iterations, based on validation performance (Vina score).** Finally, we set the hidden size to 128, the number of layers to 3, and the number of decoding iterations to 3 for dyMEAN. For FAIR, the numbers of layers for the atom- and residue-level encoders are 6 and 2, respectively; Ka and Kr are set to 24 and 8, respectively; the number of attention heads is 4; and the hidden dimension d is 128.
We will describe the selection/tuning of the baseline hyperparameters more clearly in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my concern, I read the paragraph in the appendix:
"To train the binding affinity predictor, we first annotate the data points in the corresponding training
set: data points are annotated 1 if their affinity is higher than the average score of the dataset, otherwise 0."
Could you please clarify more on "higher than the average score of the dataset", the average score you mentioned in this sentence is calculated on the training data?
---
Reply to Comment 1.1.1:
Title: Thanks for the response!
Comment: Yes, the average score (Vina score) is calculated on the training data. We annotate the data point as 1 if its calculated Vina score is lower than the average (higher affinity); we annotate the data point as 0 if its calculated Vina score is higher than the average (lower affinity). Therefore, all the calculations are based on the training data and there are no data leakage risks. We will make the statements in the paper clearer in the revised version. Thanks for the comments!
Bests,
Authors | Summary: The paper proposed PocketFlow, a generative model for designing protein pockets that bind with ligands. It aims to overcome limitations in existing methods by incorporating protein-ligand interaction priors and utilizing flow matching. PocketFlow is designed to handle multiple ligand modalities and demonstrates superior performance on various benchmarks.
Strengths: - The paper provides detailed methodology and includes anonymous code to reproduce the results.
- The proposed method generalizes across various ligand modalities, including small molecules, peptides, and RNA.
- The model outperforms existing methods on multiple benchmarks, demonstrating significant improvements in Vina scores and scRMSD.
- The model explicitly models key protein-ligand interactions, enhancing binding stability and affinity.
Weaknesses: - The method only considers interactions between protein and ligand, potentially neglecting interactions between protein sidechains within the pocket region.
- For some residue types, there might be π instead of 2π symmetry in the sidechain structures, which the proposed method seems to simplify.
- The use of flow matching and multiple guidance mechanisms could result in higher computational costs compared to simpler models.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why were these four types of interactions (hydrogen bond, salt bridge, hydrophobic, π-π stacking) chosen? Are there other interactions existing in protein-ligand binding that should be considered?
2. Can the proposed method be extended to protein-ligand docking (fixing the pocket type and structure)?
3. What is the efficiency of calculating different interaction types?
4. Is there any metric to evaluate the correctness of the interactions generated?
5. The original IPA relies on the rotational equivariance of frame orientation to achieve model’s invariance. However, the proposed method additionally treats the ligand atom as a residue and uses an invariant identity matrix to represent its orientation. Will the proposed IPA still output invariant embeddings?
6. How are the guidance coefficients for different guidance mechanisms determined?
7. Is RFDiffusionAA trained on the same dataset as the proposed model?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments and appreciation!
**Comment 1**: The method only considers interactions between protein and ligand, potentially neglecting interactions between protein sidechains within the pocket region.
**Response 1**: Thanks for the insightful comment! In PocketFlow, we explicitly consider protein-ligand interactions as guidance terms because protein-ligand interactions mainly contribute to the protein-ligand binding affinity. They are also learned implicitly within our model architecture: the pairwise attentions capture inter-residue interactions and the supervision of the predicted sidechain torsion angles encourages forming valid and stable protein conformations.
We note that **PocketFlow has a flexible framework and can be generalized to model protein sidechain interactions with minor modifications**. For model simplicity and computational efficiency, we only consider the protein-ligand interactions in our current version of PocketFlow. We will include the above discussions in our revised paper.
**Comment 2**: For some residue types, there might be π instead of 2π symmetry in the sidechain structures, which the proposed method seems to simplify.
**Response 2**: Thanks for the constructive comments! Yes, we are aware that some sidechain torsion angles are 180°-rotation-symmetric, such that the predicted torsion angle χ and χ + π yield the same physical structure. Since only 4 of the 39 possible sidechain torsion angles have this π-symmetric property, we did not consider them, for simplicity, in our preliminary version of PocketFlow. Following AlphaFold2 (supplementary 1.9.1), we can incorporate such π-symmetric constraints by providing the alternative ground-truth torsion angle during training, with minor modifications to the code. We will include the discussion and the implementation in our revised paper.
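A minimal sketch of the alternative-ground-truth idea for π-symmetric torsion angles (the function names and the plain angular loss are illustrative, not the paper's actual training loss):

```python
import math

def ang_diff(a, b):
    """Smallest absolute angular difference between a and b (radians)."""
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def torsion_loss(pred, target, pi_symmetric=False):
    """Angular loss; for pi-symmetric chi angles, also score against target + pi
    and keep the smaller value (AlphaFold2 suppl. 1.9.1-style alternative truth)."""
    loss = ang_diff(pred, target)
    if pi_symmetric:
        loss = min(loss, ang_diff(pred, target + math.pi))
    return loss

# A prediction off by exactly pi incurs (numerically) no loss for a
# pi-symmetric torsion, but the maximal loss otherwise.
print(torsion_loss(0.1, 0.1 + math.pi, pi_symmetric=True))   # ~0.0
print(torsion_loss(0.1, 0.1 + math.pi, pi_symmetric=False))  # ~pi
```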
**Comment 3**: The use of flow matching and multiple guidance mechanisms could result in higher computational costs compared to simpler models.
**Response 3**: Thanks for the question! In Figure 7 of the submitted paper, we compare the average generation time of different models. First, we observed that PocketFlow is much more efficient than template-matching methods (DEPACT) and diffusion-based models (RFDiffusionAA). Its generation time is only larger than that of dyMEAN and FAIR. Considering the performance improvement brought by PocketFlow (1.29 in Vina Score and 0.05 in scRMSD), the additional overhead is acceptable. We also compared PocketFlow with its variants without guidance and observed that the guidance mechanisms are quite efficient and do not introduce much overhead.
**Comment 4**: Why were these four types of interactions chosen? Are there other interactions existing in protein-ligand binding that should be considered?
**Response 4**: In PocketFlow, these four types of interactions are chosen because they are the most frequently encountered and are crucial for strong binding stability and affinity. Previous works, e.g., KGDiff ([93] in the paper), also consider these four interactions and managed to improve binding affinity and generation quality. In Appendix B, we describe the details of the four dominant interactions. There are other interactions in protein-ligand binding, such as van der Waals forces and metal coordination; we did not consider them due to their weak contribution to binding affinity or low occurrence frequency. Moreover, considering only the four dominant interaction types improves computational efficiency.
**Comment 5**: Can PocketFlow be extended to protein-ligand docking (fixing the pocket type and structure)?
**Response 5**: Yes, PocketFlow has a generalized and robust architecture that can be extended to protein-ligand docking by, e.g., fixing the pocket type and structure. During the rebuttal period, we conducted preliminary experiments adapting PocketFlow to blind docking tasks. We conducted experiments on the PDBbind v2020 dataset and followed previous works such as DiffDock and FABind for the data preprocessing. For the testing phase, we used 363 complexes recorded after 2019. In the table below, we follow previous works and report the percentage of ligands with RMSD below 2/5 Å and with centroid distance below 2/5 Å. We observe that even though PocketFlow is not specially optimized for docking, it still achieves strong performance compared with state-of-the-art baselines.
| Methods | Ligand RMSD (% < 2Å) | Ligand RMSD (% < 5Å) | Centroid Dist. (% < 2Å) | Centroid Dist. (% < 5Å) | Avg. Runtime (s) |
|---------------|------|------|------|------|------|
| QVINA-W | 15.3 | 39.5 | 25.7 | 39.5 | 49* |
| GNINA | 12.5 | 37.0 | 20.4 | 37.9 | 146 |
| SMINA | 13.5 | 39.1 | 29.9 | 41.7 | 146* |
| GLIDE | 19.6 | 32.2 | 35.4 | 46.4 | 1405* |
| VINA | 10.3 | 27.3 | 20.4 | 37.3 | 205* |
| EquiBind | 3.4 | 43.8 | 16.7 | 43.8 | **0.03** |
| TANKBind | 4.3 | 44.8 | **44.0** | 70.8 | 0.87 |
| E3Bind | 4.5 | 34.3 | 33.8 | 66.0 | 0.83 |
| DiffDock | **32.0** | 48.3 | 33.8 | 62.6 | 20.83 |
| FABind | 19.4 | *64.0* | 5.9 | **75.7** | *0.12* |
| PocketFlow | *30.2* | **65.7** | *38.0* | *71.3* | 0.45 |
- **Bold**: Best results
- *Italic*: Second best
---
Rebuttal 2:
Title: Further Response to Reviewer gNH7
Comment: **Comment 6**: What is the efficiency of calculating different interaction types?
**Response 6**: Thanks for the question! In Figure 7 in the Appendix, we compared the average generation time of PocketFlow and its variants, e.g., PocketFlow without the different guidance terms. We observe that calculating the protein-ligand interactions for guidance does not bring much overhead (~28% of the total generation time).
To detect the different interaction types in the generated pockets, we leverage PLIP and PoseCheck. They are also efficient tools: it takes around 20 seconds in total to process 100 generated pockets.
**Comment 7**: Is there any metric to evaluate the correctness of the interactions generated?
**Response 7**: Thanks for the question! In PocketFlow, we used the protein-ligand interaction profiler (PLIP) [69] to detect and annotate the protein-ligand interactions for each residue by analyzing their binding structure (Table 3 and lines 304-314 of the paper). Generally, PLIP is based on physical/chemical rules and employs four steps including structure preparation, functional characterization, rule-based matching, and filtering to detect the generated interactions. In the PLIP paper [69], the authors compared the detected/ground truth interactions of 30 literature-validated examples and achieved good consistency. Therefore, we can regard the detected interactions by PLIP as correct.
**Comment 8**: The original IPA relies on the rotational equivariance of frame orientation to achieve model’s invariance. However, the proposed method additionally treats the ligand atom as a residue and uses an invariant identity matrix to represent its orientation. Will the proposed IPA still output invariant embeddings?
**Response 8**: Thanks for the insightful question! We agree that IPA relies on the rotational equivariance of frame orientation to achieve the model’s invariance. In our case, there is no canonical orientation for the ligand atoms and we set them as an identity matrix for simplicity.
To achieve invariant embeddings, we can initialize the protein scaffold by aligning the protein principal axes with the coordinate axes. This can be achieved by subtracting the center of mass (COM), computing the inertia matrix, diagonalizing the inertia matrix, and aligning the protein to principal axes. In experiments below, we compared such an initialization strategy with the default setting (only subtracting COM). The experimental results are comparable and show no significant differences. Therefore, we use the default setting for simplicity. We will include more discussions in our revised paper.
| Model | CrossDocked | | | Binding MOAD | | |
|---------------|-----------------------|-------------------|------------------|----------------------|-------------------|------------------|
| | AAR (↑) | scRMSD (↓) | Vina (↓) | AAR (↑) | scRMSD (↓) | Vina (↓) |
| PocketFlow | 52.19±1.34% | 0.67±0.04 | -8.236±0.16 | 54.30±1.70% | 0.68±0.03 | -9.370±0.24 |
| w/ principal axes aligning | 52.12±1.29% | 0.67±0.03 | -8.227±0.18 | 54.47±1.73% | 0.69±0.03 | -9.362±0.28 |
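The alignment procedure described in Response 8 (subtract the center of mass, diagonalize the inertia matrix, align to the principal axes) can be sketched as follows; random coordinates stand in for a protein backbone, and this is an illustrative implementation rather than the paper's code:

```python
import numpy as np

# Stand-in "protein" coordinates (N x 3); unit masses assumed.
coords = np.random.default_rng(0).normal(size=(50, 3)) * np.array([3.0, 1.5, 0.5])

# 1) Subtract the center of mass.
centered = coords - coords.mean(axis=0)

# 2) Gyration matrix: it shares its principal axes with the inertia tensor,
#    since inertia = trace(G) * I - G.  3) Diagonalize it.
gyration = centered.T @ centered
eigvals, eigvecs = np.linalg.eigh(gyration)

# 4) Rotate so the principal axes coincide with the coordinate axes.
#    (For a proper rotation, flip a column of eigvecs if its determinant is -1.)
aligned = centered @ eigvecs

# After alignment, the gyration matrix is diagonal up to numerical error.
assert np.allclose(aligned.T @ aligned, np.diag(eigvals))
```

Because the resulting orientation is determined by the structure itself, any rigid rotation of the input yields the same aligned coordinates up to axis-sign choices.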
**Comment 9**: How are the guidance coefficients for different guidance mechanisms determined?
**Response 9**: In the default setting, the guidance coefficients of PocketFlow, including $\gamma, \xi_1, \xi_2,$ and $\xi_3$, are set to 1 and achieve good results. We also explore the influence of the guidance coefficients in the appendix. For example, in Figure 8 of the submitted paper, we explore the impact of the Affinity Guidance strength ($\gamma$) on various generation metrics. As $\gamma$ is scaled up, the Vina Score significantly improves and quickly stabilizes; AAR initially increases before gradually decreasing; scRMSD, on the other hand, increases with higher $\gamma$. These observations underscore the importance of selecting an appropriate $\gamma$ to effectively balance the guidance and unconditional terms. While Affinity Guidance promotes the generation of high-affinity pockets, an excessively high $\gamma$ can result in less valid pocket sequences or structures. In the default configuration, $\gamma$ is set to 1 for simplicity.
**Comment 10**: Is RFDiffusionAA trained on the same dataset as the proposed model?
**Response 10**: As indicated in lines 822-823, we used the provided checkpoints of RFDiffusionAA for all the experiments because the training code is not available (https://github.com/baker-laboratory/rf_diffusion_all_atom). The original training data of RFDiffusionAA contains 121,800 protein-small molecule structures, 112,546 protein-metal complexes, and 12,689 structures with covalently modified amino acids, which represent a broad set of protein-ligand complex structures. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stochastic Optimal Control for Diffusion Bridges in Function Spaces | Accept (poster) | Summary: The paper uses a stochastic optimal control to derive Doob's h-transform in infinite dimensions, and it shows the relation between solving the optimal control problem and learning diffusion generative models. The approach applies both to bridge sampling and for generative modelling. The approach is demonstrated on infinite dimensional problems, including bridges between images and bridges between probability distributions.
Strengths: Strengths:
- well-written and interesting paper
- the stochastic optimal control approach to deriving Doob's h-transform is well-founded and interesting
- the authors derive a Bayesian inference algorithm using a reference measure
- the method is tested on simple examples
Weaknesses: Weaknesses:
- Doob's h-transform in the infinite-dimensional setting has been derived using other methods in previous papers (both in the linear and non-linear cases, e.g. refs [2,47], https://arxiv.org/abs/math/0610386). I believe the list of contributions and the introduction do not clearly show that the current paper is not the first to do this; e.g., [2] is first mentioned much later in the paper. I am not sure the introduction and list of contributions adequately reflect this, something that should be addressed before acceptance
Technical Quality: 3
Clarity: 3
Questions for Authors: no questions
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewer for their thorough evaluation of our work. We appreciate your recognition of the merits of our research.
-----
**1.Clarification on prior work**
- We agree that clarifying the relationship between previous papers on Doob's h-transform and our work is essential to avoid any unintended confusion. Our contribution involves leveraging stochastic optimal control to derive Doob's h-transform and extending finite-dimensional SOC problems based on conditional diffusion to infinite-dimensional spaces. In the revised manuscript, we will explicitly outline how our contribution compares to previous works, particularly [1].
-----
[1] Baker et al., "Conditioning non-linear and infinite-dimensional diffusion processes."
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. Assuming that the revised manuscript clearly describes your contribution in comparison to the existing literature as you write in the response and as the other reviewers also request, I keep my accept rating. | Summary: The authors investigate the notion of h-transform in infinite dimensional state spaces and provide a novel representation (Theorem 2.3) based on connections to stochastic optimal control.
The authors introduce two approaches to using this h-transform derivation: first, something resembling bridge matching, where both marginals are known; and second, simulating the process with a network-parameterized h-transform and taking gradients through the simulation.
The authors then apply this to image super resolution and Bayesian inference tasks in function space.
Strengths: - Derivations appear correct
- Although the h-transform has been described in infinite dimensions via Hilbert spaces in the context of diffusion models in Baker et al 2024 (https://arxiv.org/pdf/2402.01434), as far as I am aware the connection to optimal control is novel.
- Experiments are reasonable compared to other infinite-dimensional methods (see below) but still not on the same level as fixed-dimension methods, e.g. for super-resolution.
- Spectral diffusion processes, Phillips et al 2023: https://arxiv.org/abs/2209.14125
- Neural Diffusion Processes, Dutordoir et al 2022, https://arxiv.org/abs/2206.03992
- Baker et al 2024 (https://arxiv.org/pdf/2402.01434)
Weaknesses: - Motivation for an infinite-dimensional diffusion bridge is not very strong and the experiments are not very convincing.
I am not so familiar with the Bayesian inference experiments and what is SOTA. There are a few baselines missing as noted below. For the superresolution task there are stronger and simpler methods which have not been discussed. I think some stronger use-case in scientific applications would be needed for a higher score.
- More discussion with Baker et al 2024 (https://arxiv.org/pdf/2402.01434) would be appreciated
- As the authors note, the second training method for Bayesian learning problems (Alg 2) requires taking gradients through the simulated diffusion which can be slow, unstable and memory intensive. This goes against much of the diffusion model philosophy of splitting the generative problem into smaller problems through time and solving each jointly. I fear this will not be very scalable beyond 2D.
**Experiments**
- FID score or other quantitative metrics are note provided for superresolution tasks.
- There are no baselines or comparisons to other methods, even though there are many super-resolution and infinite-dimensional diffusion methods.
- The authors compare to neural processes, but there are more recent and comparable baselines for similar infinite-dimensional / functional / Bayesian experiments which do not rely on gradients through the simulated process, such as:
- Spectral diffusion processes, Phillips et al 2023: https://arxiv.org/abs/2209.14125
- Neural Diffusion Processes, Dutordoir et al 2022, https://arxiv.org/abs/2206.03992
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the recognition of our paper's strengths and extend our thanks to the reviewer for their comprehensive review and insightful comments. Below, we provide detailed responses to address each valuable comment.
-----
**1.Motivation, Comparison with baselines**
- We agree with the reviewer’s concerns regarding motivation and experiments. To address these, we have included a PDF file in the general response with additional experiments comparing unpaired image transfer methods with finite-dimensional baselines [1] and 1D functional generation with infinite-dimensional baselines [2, 3]. Additionally, we have further clarified our motivation. Please kindly refer to that section for more details.
-----
**2.Comparison with [4]**
- [4] primarily focused on developing the conditional diffusion process in function space. To achieve this, they defined Doob’s h-transform in infinite-dimensional space using Itô’s lemma and Girsanov’s theorem. This approach has its merits, such as enabling the conditioning of non-linear SDEs (while our work considers linear SDEs). However, simulating a conditioned non-linear SDE is often challenging because the conditional distribution of such an SDE is generally intractable. Therefore, they require the approximation algorithm presented in [5].
- While our approach shares the idea of developing an infinite-dimensional Doob’s h transform, as the reviewer already mentioned, our primary goal is not merely to derive Doob’s h transform but to generalize various finite-dimensional sampling problems [6, 7] into the infinite-dimensional space by exploiting the theory of infinite-dimensional stochastic optimal control.
- In practice, the choice of linear SDEs to develop the relevant theory might appear to be a strict limitation for modeling complex distributions. However, similar to most recent diffusion-based models, the linear form may be sufficient for modeling. Moreover, this choice can be beneficial as it allows for more scalable algorithms due to the closed-form solution of the conditional distribution. In this light, by leveraging the theory of stochastic optimal control along with the choice of linear dynamical systems, our contribution also includes proposing tractable learning algorithms for real-world sampling problems.
-----
**3.Computational concerns**
- As the reviewer pointed out and as we stated in our paper, Algorithm 2 may induce computational difficulties. While it is possible to consider a more computationally favorable approach, such as implementing the adjoint solver [8] for memory efficiency or using the variance reduction technique proposed in [9], in the current work we focus on the theoretical property that the optimal control still yields Bayesian posterior sampling despite being defined on function space. Proposing a more scalable algorithm will be an interesting direction for future work.
-----
[1] Peluchetti, “Diffusion bridge mixture transports, Schrödinger bridge problems and generative modeling.”
[2] Phillips et al., “Spectral Diffusion Processes”
[3] Dutordoir et al., “Neural Diffusion Processes”
[4] Baker et al., “Conditioning non-linear and infinite-dimensional diffusion processes”
[5] Heng et al., “Simulating Diffusion Bridges with Score Matching”
[6] Zhang et al., “Path Integral Sampler: A Stochastic Control Approach For Sampling”
[7] Shi et al., “Diffusion schrodinger bridge matching”
[8] Li et al., “Scalable Gradients for Stochastic Differential Equations”
[9] Xu et al., “Infinitely Deep Bayesian Neural Networks with Stochastic Differential Equations”
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I believe my review and scores are appropriate. | Summary: This article proposes a perspective on diffusion-based generative models based on stochastic optimal control, with objective functions based on the log density ratio between objectives.
Strengths: As far as I could evaluate, the mathematics are correct, and this particular mathematical perspective is new (to the best of my knowledge).
Weaknesses: I found this submission to have a weak presentation. It reads more like a stochastic calculus journal article than a machine learning conference submission.
This perspective is not clearly motivated: what is gained by considering an infinite-dimensional perspective compared to the wide literature already approaching diffusion-based methods through the lens of stochastic optimal control? Since several elements of this problem are infinite-dimensional in nature (distributions of random variables, score-matching functions, etc.), some early-on explanation and clarification of the approach considered here would be helpful.
Further, while a lot of the writing is centered around an infinite-dimensional perspective, this is then converted to a parametric model, with finitely many parameters. How much of the infinite-dimensional perspective is then lost? Is this important?
Technical Quality: 3
Clarity: 1
Questions for Authors: Thanks to the authors for addressing my comments during the rebuttal.
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: Presentation and motivation - addressed by authors during rebuttal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We gratefully thank the reviewers for their valuable feedback and suggestions. Here, we address the concerns raised by the reviewer.
-----
**1.Early-on explanation and clarification**
- Following the reviewers’ suggestions, we have further clarified our motivation in the general response. Please kindly refer to that section for more details. We hope this provides the clarity you need.
-----
**2.Finite-dimensional Approximation**
- We would like to point out that a model having a finite number of parameters does not mean it is restricted to modeling finite-dimensional objects; for instance, Gaussian process regression, one of the most popular infinite-dimensional stochastic process models, effectively requires a finite number of parameters to be estimated from the observed data. Usually, the function is evaluated only at a countable set of sampling points that are assumed to be generated from an infinite-dimensional stochastic process. In this case, we can approximate the infinite-dimensional function by fitting the model to the finite sampling points with finite-dimensional parameters.
- The “finite-dimensional approximation” happening in our model is in the part where we approximate the covariance operator Q. Specifically, we approximate Q via truncation, that is, by choosing a finite number of eigenfunctions among the infinitely many. This approximation may incur approximation error, but it does not alter the nature of our model as an infinite-dimensional model. To see this, note that our model can handle image data in a resolution-agnostic way. This is possible because ours deals with infinite-dimensional stochastic processes, so it can model a varying number of finite sampling points.
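To make the truncation idea concrete, here is a purely illustrative NumPy sketch (not the paper's code) of sampling approximately from a centered Gaussian measure with an RBF covariance operator by keeping only the top eigenpairs of its discretization; the grid size, lengthscale, and number of retained modes are assumed values:

```python
import numpy as np

def sample_truncated_q(grid, n_modes, lengthscale=0.2, rng=None):
    """Approximate a draw from N(0, Q), where Q is an RBF-kernel covariance
    operator, by truncating its (discretized) eigenexpansion to n_modes."""
    rng = np.random.default_rng(rng)
    # Discretize Q on the grid: K[i, j] = exp(-|x_i - x_j|^2 / (2 l^2)).
    diff = grid[:, None] - grid[None, :]
    K = np.exp(-diff**2 / (2 * lengthscale**2))
    # Eigendecomposition (ascending order); keep the n_modes largest pairs.
    eigvals, eigvecs = np.linalg.eigh(K)
    eigvals = np.clip(eigvals[::-1][:n_modes], 0.0, None)  # guard tiny negatives
    eigvecs = eigvecs[:, ::-1][:, :n_modes]
    # Truncated Karhunen-Loeve expansion: sum_k sqrt(lambda_k) * xi_k * phi_k.
    xi = rng.standard_normal(n_modes)
    return eigvecs @ (np.sqrt(eigvals) * xi)

grid = np.linspace(0.0, 1.0, 128)
f = sample_truncated_q(grid, n_modes=16)
```

Dropping the small-eigenvalue modes is what introduces the approximation error mentioned above, while the sample remains a function that can be evaluated on grids of any size.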
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments and very helpful rebuttal - I have a better understanding now and have upgraded my score. | Summary: The paper presents stochastic control in function spaces with applications in diffusion bridges and Bayesian learning. Since the Lebesgue measure does not exist in infinite dimensional space, the authors derive Doob-h function with the Radon-Nikodym density with respect to a suitable Gaussian measure and conduct bridge matching experiments under this setup.
Strengths: Overall, the paper is well-motivated and well-written. The paper reviews the stochastic control in function space and the connection of Doob's h-transform with stochastic control and bridge matching in Section 2. It transits smoothly to Section 3, where it proposes an algorithm for diffusion bridges in function space, and an extension for Bayesian learning.
Weaknesses: As the theory exists for stochastic control in function space, and there is a recent work on h-transform [1] and generative model in infinite dimensional space, the novelty mainly lies in the application to bridge matching and Bayesian learning. These applications are interesting and important, however, the weaknesses are in the discussion on Bayesian learning, and the experiments on bridge matching. In particular, there should be a comparison with finite-dimensional bridge matching; several arguments in section 3.2 about Bayesian learning need more clarification (see questions for the details of this point).
[1] Baker, Elizabeth Louise, et al. "Conditioning non-linear and infinite-dimensional diffusion processes." arXiv preprint arXiv:2402.01434 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: Comments and major questions:
1. As the paper introduced in section 2.1, one can instead consider the cylindrical Wiener processes on the Cameron-Martin space. What will break down in the current results? Will the cylindrical Wiener processes set-up bring convenience to the experiments as it is implemented in finite dimensions?
2. How do you arrive at equation(21)? What assumptions are required, and what are the regularity requirements for the energy function? It would also be good to remind the readers what mu_T is here.
3. In equation(25), how are energy function U and covariance operator Q determined in general? What are the particular choices used in the presented experiments?
4. What are the challenges of using time-dependent diffusion processes in the current method?
Minor points:
1. The paper repeatedly refers to Lemma 2.2, but the authors seem to be referring to Theorem 2.2.
2. The paper should include a proof of Theorem 2.3 for its completeness and rigor.
3. Why is it H_0 instead of H in Theorem 3.2?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your interest in our research and acknowledgment of its significant contributions. We are also grateful for the insightful questions raised by the reviewer, to which we have provided detailed responses in the subsequent text.
-----
**1.Comparison with finite-dimensional bridge matching**
- Thank you for your valuable suggestion to improve our work. In general response, we have included a PDF file containing additional experiments and a comparison with finite-dimensional method [1]. We will incorporate these results into the paper.
-----
**2.Cylindrical Wiener process**
- In theory, selecting a cylindrical Wiener process for infinite-dimensional SDEs can also be a viable option. Indeed, when prior knowledge of the data domain isn't available, opting for a cylindrical Wiener process, as noted by the reviewer, can facilitate the construction of a bridge model. However, the choice of Q determines the geometric structure of the Hilbert space where the functional data resides. Therefore, if we model Q so that our Hilbert space reflects characteristics of the target data, such as smoothness or curvature information, it can be beneficial.
- In practice, where our objective is to model complex data as a function, the choice of the covariance operator Q can significantly impact the capability of the learned neural network operators (the control, in our case). For instance, in time-series imputation, our baselines [2, 3] differ only in their selection of Q (technically a “kernel”, since they are finite-dimensional models): [2] uses a standard Wiener process while [3] employs an RBF kernel to construct the Wiener process, leading to performance improvements. Furthermore, in generative modeling, [4] empirically demonstrates that selecting an appropriate operator Q enhances the correctness of generation. Specifically, [4] highlights that opting for a cylindrical Wiener process may lead to issues such as mode collapse.
-----
**3.Regularity condition for U and choice of U and Q**
- Equation (21) comes from a Bayes formula [5, Section 2] $\frac{d\pi_T}{d\mu_{prior}} \propto \exp(-\mathcal{U}(\mathbf{X}))$, where $\mu_{prior} = \mathcal{N}(m_{prior}, Q_{prior})$ and the potential $\mathcal{U}$ is a measurable mapping $\mathcal{H} \to \mathbb{R}$ for a given $\mathbf{X} \sim \mu_{prior}$. Hence, $\mu_T$ in equation (21), equation (25), and lines 245-246 and 751 should be changed into $\mu_{prior}$. We apologize for the typo and for any confusion it may have caused in understanding the paper. The potential $\mathcal{U}$ is typically chosen as a negative log-likelihood function (also referred to as the potential energy); in our case, we set it as a negative Gaussian log-likelihood function. For Bayesian learning, we choose $\mathcal{A}=-\frac{1}{2}$ and the covariance operator $Q$ as an RBF kernel. We have detailed the setting in Appendix A.9.2.
-----
**4.Time-dependent SDEs**
- The main challenges of using a time-dependent diffusion process in our method are proving the existence and uniqueness of the invariant measure. For an explicit form of the h-function, as stated in Theorem 2.3, we need to define a certain class of Gaussian measures, where the collection of time-dependent Gaussian measures is equivalent to its invariant measure over a long-term period. This class of Gaussian measures should be defined by a linear SDE.
-----
**5.Minor comments**
- We appreciate the reviewer pointing out the typos related to Lemma 2.2 and $\mathcal{H}_0$ in Theorem 3.2. The latter should be expressed with respect to the norm in $\mathcal{H}$. We will make the necessary corrections in the revised manuscript. Moreover, as the reviewer suggested, we will include the proof of Theorem 2.3.
-----
[1] Peluchetti, “Diffusion bridge mixture transports, Schrödinger bridge problems and generative modeling.”
[2] Tashiro et al., “CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation”
[3] Bilos et al., “Modeling Temporal Data as Continuous Functions with Stochastic Process Diffusion”
[4] Hagemann et al., “Multilevel Diffusion: Infinite Dimensional Score-Based Diffusion Models for Image Generation“
[5] Hairer et al., "Signal Processing Problems on Function Space: Bayesian Formulation, Stochastic PDEs and Effective MCMC Methods"
---
Rebuttal Comment 1.1:
Comment: Thank you for the extensive responses and additional experiments! I expect the rebuttal/general response to be included in the final submission. | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort the reviewers have dedicated to evaluating our paper. In response to their valuable and insightful feedback, we have provided some general responses that address comments common to all reviewers. **The attached PDF file includes relevant figures and tables for additional experiments.**
-----
**1.Comparison with baselines**
- In line with the reviewers' suggestions, we conduct additional experiments to demonstrate the applicability of our method to various real-world problems.
- First, for a comparison with recent infinite-dimensional baselines, we conduct an experiment on a 1D function generation task. We evaluated our method against baselines on three datasets: Quadratic, Melbourne, and Gridwatch, following the setting provided in [1]. For generative modeling, we set the initial distribution as a centered Gaussian distribution with covariance operator $Q$ and the terminal distribution as the target data distribution, and utilize the bridge matching algorithm in Alg 1. We used an RBF kernel for $Q$. For quantitative evaluation, we employed the power of a kernel two-sample hypothesis test, which attempts to distinguish the dataset from generated samples. Table 2 in the attached PDF file shows that our method is comparable to the baselines. Moreover, we provide a generated sample compared to the ground truth for each dataset in Figure 1.
- Second, we compare our proposed model with a finite(fixed)-dimensional baseline. We conduct an experiment on unpaired image transfer between MNIST dataset and EMNIST dataset. We compare the performance of the [2] and our DBFS. For a fair comparison, we adhere to the iterative training scheme proposed by [2] where two forward control and two backward control models are learned alternately. We set $\sigma=1$ for all methods. For quantitative evaluation, we estimate the FID score between the generated data samples and real datasets. Table 1 in the attached PDF file shows that our method is comparable to the finite-dimensional method.
Furthermore, we provide additional generated samples at various unseen resolutions in Figure 2 to demonstrate the resolution-invariant property inherent in the proposed infinite-dimensional models. We want to stress that our method may have slightly worse FID scores compared to finite-dimensional baselines. This may reflect the observation in [3], where resolution-agnostic methods often have worse FID scores than resolution-specific methods. They argue that this is because resolution-specific methods can incorporate domain-specific design choices into their score networks (e.g., translation equivariance in CNNs for images). An interesting direction for future work would be to develop well-designed score operators for infinite-dimensional diffusion models.
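For readers unfamiliar with the kernel two-sample test used in the evaluation above, such tests are typically built on the maximum mean discrepancy (MMD). The following NumPy sketch is illustrative only (bandwidth, sample sizes, and the shift are assumed values; it is not the authors' evaluation code):

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased squared MMD between samples X (n x d) and Y (m x d) under
    a Gaussian kernel; larger values suggest the distributions differ."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth**2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)  # drop diagonal terms for the unbiased estimate
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.standard_normal((100, 2)), rng.standard_normal((100, 2)))
diff = mmd2_unbiased(rng.standard_normal((100, 2)), rng.standard_normal((100, 2)) + 2.0)
```

The test's power is then estimated as the fraction of trials in which the statistic (compared against a permutation-based threshold) correctly rejects the null hypothesis that the two samples share a distribution.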
-----
**2.Motivation**
- Traditionally, there has been interest in sampling from a probability measure on infinite-dimensional Hilbert spaces, particularly in Bayesian inverse problems [4]. Recently, the modeling of data as continuous functions has become increasingly popular within the machine learning community. This functional representation avoids the need for discretization, enabling the handling of data at arbitrary resolutions. Consequently, parameterizing these functions with neural networks provides memory efficiency and the flexibility to represent various data forms [5]. For example, we can regard an image as a continuous function that takes a 2-dimensional grid pixel location as input and outputs grayscale or RGB channel values. It is therefore an infinite-dimensional object, as a continuous function can produce outputs for any 2-dimensional input defined on some domain.
- Since diffusion-based models are powerful inference tools for various tasks, researchers have been working to extend these models to handle functional data representation. To achieve this, they have generalized the framework of previous diffusion models by extending their formulation into infinite-dimensional Hilbert spaces, also known as function spaces [6, 7]. However, previous diffusion-based generative models typically focus on sampling from a target data distribution. This framework cannot easily address various sampling problems, such as distribution transfer or exact sampling from a posterior distribution (in functional form as in equation (21)).
- In the finite-dimensional case, these problems can be solved by exploiting the theory of stochastic optimal control (SOC) [8]. This motivates us to extend and generalize finite-dimensional SOC into the infinite-dimensional case to meet the demands of sampling problems from a functional perspective. In practice, by generalizing previous SOC-related problems into infinite-dimensional space, our model can naturally achieve resolution-free data transfer between any two image distributions, perform posterior sampling from a distribution over functions such as GP-posterior, and modeling irregular time series.
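The image-as-function view described above can be sketched in a few lines of NumPy; this toy interpolation (the function name and bilinear scheme are illustrative assumptions, not the paper's parameterization) shows how a pixel grid becomes a function queryable at arbitrary 2D locations, and hence at any resolution:

```python
import numpy as np

def image_as_function(img, coords):
    """Treat an (H, W) grayscale image as a function on [0, 1]^2 via
    bilinear interpolation, so it can be queried at arbitrary 2D points."""
    H, W = img.shape
    y = coords[:, 0] * (H - 1)
    x = coords[:, 1] * (W - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

img = np.arange(16.0).reshape(4, 4)  # a tiny 4x4 "image"
vals = image_as_function(img, np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]]))
```

Evaluating on a denser coordinate grid yields the same underlying function at a higher resolution, which is the resolution-agnostic property the motivation appeals to.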
-----
[1] Phillips et al., “Spectral Diffusion Processes”
[2] Peluchetti, “Diffusion bridge mixture transports, Schrödinger bridge problems and generative modeling.”
[3] Zhuang et al., “Diffusion probabilistic fields”
[4] Hairer et al., “Signal Processing Problems on Function Space: Bayesian Formulation, Stochastic PDEs and Effective MCMC Methods”
[5] Dupont et al., “From data to functa: Your data point is a function and you can treat it like one”
[6] Franzese et al., “Continuous-Time Functional Diffusion Processes”
[7] Lim et al., “Score-based Generative Modeling through Stochastic Evolution Equations in Hilbert Space”
[8] Zhang et al., “Path Integral Sampler: A Stochastic Control Approach For Sampling”
Pdf: /pdf/644995e4b2a163fab2bc0c6069d9d683c52d0c43.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO | Accept (poster) | Summary: The paper investigates the phenomenon of feature collapse in the popular on-policy RL algorithm PPO. It demonstrates that increasing the sample reuse (number of epochs) in PPO deteriorates the feature rank and plasticity. It also finds that the clipping operation for the conservative update in PPO does not prevent this feature collapse. Furthermore, it shows that feature collapse leads to the degradation of the trust region, and vice versa. Finally, the paper proposes several techniques to prevent feature collapse, including feature regularization, network sharing, and Adam hyperparameter adjustments, though these techniques do not fully resolve the issue.
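The feature-rank deterioration mentioned in this summary is usually quantified with an effective-rank measure over a batch of features. The following NumPy sketch illustrates one common variant (the threshold `delta` and the exact form are assumptions, not necessarily the paper's metric):

```python
import numpy as np

def effective_rank(features, delta=0.01):
    """Effective rank of a feature matrix (batch x dim): the number of
    singular values needed to capture a (1 - delta) fraction of the
    cumulative spectrum; a common proxy for representation collapse."""
    s = np.linalg.svd(features, compute_uv=False)
    cum = np.cumsum(s) / s.sum()
    return int(np.searchsorted(cum, 1.0 - delta) + 1)

rng = np.random.default_rng(0)
rich = rng.standard_normal((256, 64))                          # full-rank features
collapsed = np.outer(rng.standard_normal(256), rng.standard_normal(64))  # rank-1
```

Here `effective_rank(collapsed)` is 1 while `effective_rank(rich)` is close to the feature dimension, mirroring the collapse the paper tracks during training.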
Strengths: - To the best of my knowledge, this is the first work to investigate the phenomenon of feature collapse in on-policy algorithms.
- The experimental design is strong. They appropriately select benchmarks: Atari for discrete action spaces and MuJoCo for continuous action spaces. Additionally, the experiments are extensive and well-conducted.
- The connection between feature collapse and trust region is intriguing and distinguishes this work from previous studies on feature collapse in value-based methods.
Weaknesses: - They demonstrated that training policy and value networks for more epochs leads to feature collapse, but this is not typical in PPO settings. It is uncommon to see PPO run for 6 or 8 epochs (Figure 1, 2, and 3).
- While they attempt to explain feature collapse in the critic networks, the explanation is insufficient and requires further elaboration (Section 3.1).
- While they emphasize that the focus of the paper is not on improving the performance of PPO, they do not provide a clear solution to mitigate feature collapse in PPO (Figure 6).
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Section 3.1 and Figures 7, 8, and 9, there appears to be no clear correlation between the number of epochs and the plasticity loss of the critic network, which is counterintuitive since it contrasts with findings from value-based plasticity studies. Could you explain more about this?
- In Section 3.2, you state that the clipped sample’s ratio will have their probabilities continue to go beyond the clip limit in PPO. How about replacing the clipping operation with a KL penalty? This technique was proposed in the original PPO paper [1], and it has shown no significant difference in performance compared to the clipping operation in some environments [2]. What impact do you think replacing the clipping operation with a KL penalty would have on feature collapse?
- Regarding PFO, instead of applying regularization in the feature space to ensure the current representation does not deviate from previous representations, how about applying regularization in the weight space using EWMA update?
- It would be beneficial to investigate Phasic Policy Gradient (PPG) [2]. It trains the actor network to predict the value of the old policy, which is similar to PFO. It would be interesting to analyze feature collapse in PPG to see if similar patterns emerge as observed in PFO.
[1] John Schulman et al., Proximal Policy Optimization Algorithms, arXiv 2017 \
[2] Karl Cobbe et al., Phasic Policy Gradient, ICML 2020
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1.** … leads to feature collapse, but … it is uncommon to see PPO run for 6 or 8 epochs (Figure 1, 2, and 3).
**A1.** **The collapse phenomenon we show is certainly not exclusive to running with more epochs. The phenomenon does happen with the “standard” hyperparameters of the environments we study.**
This is already the case on MuJoCo. On ALE, in Figures 1, 2, and 3 mentioned by the reviewer, we see the start of the collapse at 100M steps, which motivates us to increase the number of epochs to accelerate the collapse and allows us to show that the phenomenon is due to non-stationarity and the inability of the policy to adapt to new targets.
In this sense, **increasing the number of epochs is a tool in our analysis and is not a condition for collapse.**
To observe collapse with the “standard” hyperparameters in Figures 1, 2, and 3 it suffices to train for longer. **We show this on ALE/Phoenix and ALE/NameThisGame with 200M steps in Figures 34 and 35 in the PDF of the rebuttal.**
>**Q2.** While they attempt to explain feature collapse in the critic, the explanation is insufficient
**A2.** **We do not attempt to explain the collapse of the critic network (see lines 147-149)** which has been studied in previous works, e.g., Kumar et al., 2022 through implicit regularization and Lyle et al., 2022 who argue that sparser targets are more detrimental.
**We make a few observations** about the representation dynamics of the critic network **in relation to the sparsity of the environment** (lines 178-184), which then allow us to draw conclusions about sharing an actor-critic trunk in section 4 (lines 339-355).
If the reviewer meant the actor network, then we do describe its features and dynamics and connect them to non-stationarity. By showing that this collapse also holds in on-policy policy optimization in addition to offline, off-policy, and value-based RL as shown by previous work, we strengthen the hypothesis of non-stationarity being a major cause of the phenomenon.
Ultimately, discovering deeper causes beyond non-stationarity remains an open question as discussed in our conclusion (389-391).
>**Q3.** While they emphasize that the focus of the paper is not on improving the performance of PPO, they do not provide a clear solution to mitigate feature collapse in PPO (Figure 6).
**A3.** In this work, we study interventions that have been shown in other settings to help and present their effects on the representation dynamics of PPO. This serves two purposes. 1) **showing which ones can help** and under which conditions (which environment features), and 2) when they do help and improve the trust region at the same time, **strengthen the relation we have drawn between representation collapse and trust-region collapse.**
We have shown that sharing the actor-critic network when the environment has a rich reward signal does mitigate collapse (Figure 6, top and middle), and applying PFO also mitigates collapse when sharing the actor-critic trunk does not (Figure 6, 19, 20).
However, like any other empirical study, and as mentioned in our limitations, there is no guarantee on how well our results would generalize to other settings. **We hope that our paper encourages further theoretical studies, and only then will we have provably or “clearly” mitigated the problem.**
>**Q4.** … no clear correlation between the number of epochs and the plasticity loss of the critic network … it contrasts with findings from value-based plasticity studies.
**A4.** **Runs with higher epochs tend to have higher plasticity loss which can be explained by a stronger overfitting to previous targets and is consistent with the findings from value-based methods.**
**There are some exceptions,** which have probably raised the reviewer’s attention, which can be explained by the behavior of the policy. When the policy collapses earlier in a run with a higher number of epochs, the critic's plasticity loss is prematurely stopped (there are no more changing objectives) and ends up lower than in a run with a lower number of epochs which has continued training until the end.
**This is the case of Figure 7 and is explained in its caption.**
>**Q5.** How about replacing the clipping operation with a KL penalty? …
**A5.** We have also performed early experiments with PPO-KL. However, the adaptive coefficient of the KL trust region makes the algorithm extremely high variance and often collapses due to the coefficient exploding or collapsing independently of feature dynamics.
Nevertheless, for the runs where the coefficient was stable, **we did indeed observe the same behavior as PPO-Clip**, specifically that the features became more aliased until they collapsed, followed by a trust region collapse characterized by a blow-up of the KL divergence.
>**Q6.** ...how about applying regularization in the weight space using EWMA update?
**A6.** PFO doesn’t stem from the sole motivation of regularizing the representations, it also seeks to do so by extending the trust region set by the clipping in the output space to an additional regularizer in the feature space.
The idea is to not impose an explicit constraint or a regularization on the weights directly to allow them to adapt to the moving targets but to regularize their dynamics through the features and output trust regions.
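As a purely illustrative sketch of pairing the clipped output-space trust region with a feature-space penalty (this is not PFO's actual loss; the squared-error penalty, the coefficient `beta`, and all names are assumptions):

```python
import numpy as np

def ppo_clip_with_feature_reg(ratio, adv, feats, feats_old,
                              clip_eps=0.2, beta=0.1):
    """PPO clipped surrogate plus a proximity term penalizing drift of the
    current features from those of the old policy (hypothetical sketch)."""
    # Standard PPO-Clip surrogate objective (to be maximized, hence negated).
    surrogate = np.minimum(ratio * adv,
                           np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * adv)
    policy_loss = -surrogate.mean()
    # Feature-space regularizer: mean squared drift from the old features.
    feat_reg = ((feats - feats_old) ** 2).sum(axis=-1).mean()
    return policy_loss + beta * feat_reg
```

The point of such a design is exactly what the rebuttal describes: the weights remain free to track moving targets, while the trust region constrains both the policy outputs (via clipping) and the representation (via the feature term).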
>**Q7.** It would be beneficial to investigate Phasic Policy Gradient (PPG) [2]
**A7.** This is a great suggestion. We would put PPG in the same box of algorithms that add an auxiliary representation loss to the policy network while not interfering too much with its objective, unlike sharing the actor-critic trunk, which we show can be detrimental when its rank is low to start with.
In the scope of this work, we did not investigate such algorithms, as a thorough analysis would require assessing a comprehensive sample of these algorithms such as PPG, InFer (Lyle et al., 2022), and CURL (Laskin et al., 2020). We reserve this study for future work.
---
Rebuttal 2:
Comment: Thank you for your responses to my questions. My answers to some of your responses are shown below.
> Q1. … leads to feature collapse, but … it is uncommon to see PPO run for 6 or 8 epochs (Figure 1, 2, and 3).
A1. Thank you for the detailed explanation! Increasing the number of epochs makes sense to me now.
> Q2. While they attempt to explain feature collapse in the critic, the explanation is insufficient.
A2. I would like you to elaborate on Lines 182-184. However, I understand that the main focus of this work is not on the plasticity of the critic network. Thanks!
> Q4. … no clear correlation between the number of epochs and the plasticity loss of the critic network … it contrasts with findings from value-based plasticity studies.
A4. I think I missed the caption. Thank you for letting me know.
> Q5. How about replacing the clipping operation with a KL penalty? …
A5. Thank you for clarifying my question! Including these results will strengthen your paper.
~All of my concerns have been well addressed, and therefore I am increasing the score from 5 to 6.~
Updated: After reviewing feedback from other reviewers, I found that similar studies had been conducted prior to this work. Therefore, I have decided to maintain the original score.
---
Rebuttal Comment 2.1:
Comment: Can the reviewer elaborate on their update about prior similar studies? Which studies does the reviewer refer to and to what extent are they similar to our work?
We clearly acknowledge building on previous work in value-based methods studying representation dynamics and reference previous works on PPO exhibiting issues with non-stationarity and performance but no connection to representation dynamics. To the best of our knowledge, our work is the first to draw connections between representation collapse, trust-region issues, and performance collapse. | Summary: The paper examines the loss of plasticity and its connection to representation collapse in policy networks in online reinforcement learning (as opposed to the previously studied value networks in offline reinforcement learning). The paper establishes the problem in the Atari game domain, including the growth in the norm of the feature pre-activations of the policy network. The analysis isolates and illustrates the problem of increasing representation state collinearity and representation collapse in the trust region of a policy in a toy domain. To mitigate representation collapse, the paper proposes a feature regularization loss term based on these pre-activation features of the policy network.
The evaluation compares methods to mitigate policy network representation collapse, including the new loss, sharing the actor and critic feature trunk, and modifying the Adam optimizer (by resetting moments or faster decay of second moments), in the ALE and MuJoCo environments. The results show the loss does not impact episode returns but does mitigate loss of plasticity.
Strengths: # originality
Good
- Investigating plasticity loss in offline RL is established, but has not been extended to online RL.
- The analysis provides new insights on trust region optimization.
# quality
Good
- The toy domain isolates the effect of interest to strengthen the empirical results and motivate the new regularization term.
- The experiments show modest evidence for improvements and report variation in results over runs.
# clarity
Mostly good
- The exposition on trust region weaknesses and collapse could be clearer and make the main points sooner. The rest of the text was good.
# significance
Good
- Loss of plasticity is an important problem, particularly in the case of online RL.
- The paper will be of interest to researchers in continual learning and RL communities; both theoretical (for the trust region points) and practical (for the regularization term) researchers.
Weaknesses: The paper would benefit from outlining the trust region insight sooner as it is a core idea that only gets explained in detail on page 7 of the paper.
The results on the regularization term are a bit confusing and mixed. The narrative would benefit from clarifying the aspects that remain unresolved or ambiguous that merit further attention. The Gravitar analysis of sparse reward environments was helpful. But I was not really sure what to make of the effects on episode returns compared to representation collapse. Some way of teasing these effects out more cleanly would help.
Technical Quality: 3
Clarity: 3
Questions for Authors: - line 143: "do not include normalization layers"
- Does this help plasticity preservation? (even if only to cite prior work on the effects)
- lines 151-153: "We vary the number of epochs as a way to control the effect of non-stationarity, which gives the agent a more significant number of optimization steps per rollout while not changing the optimal target it can reach due to clipping, as opposed to changing the value of ε in the trust region for example."
- It might be interesting to contrast the effects of varying $\epsilon$ as well.
- line 280 (minor): $\pi_\theta(a_i | s)$
- $a_i$ should be $a_1$ in this case
- The toy setting was good demonstration of the problems with trust region constraints. It may help to introduce this sooner as I was confused about the trust region claims being made until that point.
- Figure 6 (minor)
- The caption only mentions "Top" and "Bottom", omitting the middle row of "NameThisGame".
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. The paper acknowledges remaining opportunities to relate the findings on representation collapse to other research on plasticity loss. And to incorporate more complex model architectures (like memory).
There is no need for additional comments on potential societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1.** The toy setting was good demonstration of the problems with trust region constraints. It may help to introduce this sooner … The paper would benefit from outlining the trust region insight sooner as it is a core idea that only gets explained in detail on page 7 of the paper.
**A1.** We thank the reviewer for the recommendation. This is indeed a key point of interest to the readers. **We will update the introduction section to describe the insights sooner and point to the theoretical derivation for interested readers.**
>**Q2.** The results on the regularization term are a bit confusing and mixed. The narrative would benefit from clarifying the aspects that remain unresolved or ambiguous that merit further attention. The Gravitar analysis of sparse reward environments was helpful. But I was not really sure what to make of the effects on episode returns compared to representation collapse. Some way of teasing these effects out more cleanly would help.
**A2.** Figure 6 presents summaries of runs at the end of training aggregating runs with different numbers of epochs. It, therefore, shows the distribution of returns and representation metrics across different hyperparameters.
With this, when a method like PFO presents an improvement of the lower tail of episodic returns and, at the same time, an improvement of the representation metrics (no extremely poor ranks or fully dead networks), we can say that **the method improves robustness to representation collapse and performance collapse, and strengthens the link we’ve drawn between the two in the earlier sections. In this sense, the effects on episode returns and representation collapse should be appreciated in tandem.**
It should also be noted that the strong relation between them mainly holds around the collapsing regime, as described in Figure 4 and explained in our global rebuttal answer GA2.
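For concreteness, the "fully dead networks" metric referenced here (the fraction of dead units) can be measured with a sketch like the following, assuming ReLU activations; the function name, batching convention, and exact procedure are illustrative and may differ from the paper's implementation:

```python
import numpy as np

def dead_unit_fraction(preactivations):
    """Fraction of ReLU units whose preactivation is <= 0 on every
    state in the batch, so their output and gradient are always zero.
    preactivations: array of shape (batch, num_units)."""
    active = (preactivations > 0).any(axis=0)
    return float(1.0 - active.mean())
```

A unit counted as dead here cannot recover under gradient descent, which is why a high fraction coincides with the collapsed regime.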
>**Q3.** line 143: "do not include normalization layers". Does this help plasticity preservation? (even if only to cite prior work on the effects)
**A3.** Lyle et al. (2023) conducted a study with non-stationary supervised learning and value-based methods and found that normalization layers do provide improvements to plasticity. In the scope of this work, we did not look at interventions that change the “degree” of non-stationarity in the input or output of the network, such as layer normalization, batch normalization, and running observation or reward normalization, as these would require a different analysis. As stated in our discussion and limitations, we plan to address these in future work.
>**Q4.** lines 151-153: "We vary the number of epochs as a way to control the effect of non-stationarity, which gives the agent a more significant number of optimization steps per rollout while not changing the optimal target it can reach due to clipping, as opposed to changing the value of ε in the trust region for example." It might be interesting to contrast the effects of varying epsilon as well.
**A4.** As mentioned, varying epsilon would change the optimum of the surrogate objective, so it is not clear that we can use it as a tool to investigate the relation between non-stationarity and feature collapse, as we did with the number of epochs.
**Nevertheless, per the reviewer’s request, we reran Figure 1 with ALE/Phoenix with different values of epsilon** and the baseline number of epochs of 4. As expected, and as shown in Figure 36 of the additional rebuttal PDF, **we observe no apparent correlation with the time of collapse, as both doubling (epsilon = 0.2) and halving (epsilon = 0.05) the baseline epsilon (epsilon = 0.1) can yield training curves with a delayed representation and performance collapse.**
>**Q5.** line 280 (minor): a_i should be a_1 in this case
**A5.** Indeed. We thank the reviewer for the detailed feedback!
>**Q6.** Figure 6 (minor). The caption only mentions "Top" and "Bottom", omitting the middle row of "NameThisGame".
**A6.** Top and middle share the same properties. We will add this to the caption.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions.
I do not see any substantial changes that would alter my score. After discussion with the other reviewers I am lowering my score in light of the weakness of the causal evidence for what drives collapse and the lack of strong results showing cases where PFO substantially improves over alternative methods.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the follow-up.
> “in light of the weakness of the causal evidence for what drives collapse”
Can the reviewer elaborate on this? Some reviewers have raised concerns about the scope of the causal evidence, which holds primarily around the collapse regime**, but it does not seem like concerns have been raised about the weakness of the evidence.**
If the reviewer refers to **the scope of the collapse, then we have addressed reviewer YzGo’s concern about the underwhelming scope in our reply to them.** Quoting:
“**Unfortunately, this is a fact.** We have extensively investigated the link in the early stages of the project to find strong evidence throughout training, **which to us was also the more exciting claim to prove; however, countless examples made us realize that the link did not necessarily hold throughout training but only around the collapsing regime.**
Using only positive examples or a narrow view of the correlations, **to claim that such a link is present throughout training would be extremely misleading and has been an issue in offline RL.**
**This does not mean that one should only be concerned about the link when the performance is starting to deteriorate. The representations don’t collapse all of a sudden, they deteriorate throughout training until they reach collapse. So mitigating representation degradation should happen throughout training and not only when around the collapsing regime.**”
> and the lack of strong results showing cases where PFO substantially improves over alternative methods.
**"PFO substantially improving over alternative methods" has never been a claim made in this work. This is a misunderstanding raised by reviewer qwmj, which was clarified in our reply to their review.** PFO is not meant to improve best run performance (lines 330-332), but to mitigate collapse, and to show that mitigating representation collapse mitigates performance collapse.
Quoting our reply to reviewer **qwmj:**
“the **significance of PFO is realized by a) successfully mitigating collapse** as observed in Figure 6 with a significantly higher median performance indicated by the black lines, and b) improving the representation metrics and obtaining a better trust region, therefore **strengthening the relation we’ve drawn between representation, trust region, and performance around collapse.**” | Summary: The paper addresses non-stationarity in RL and its impact on deep learning networks, focusing on PPO. It identifies that networks in PPO, like those in off-policy methods, suffer from representation rank deterioration, leading to performance collapse. The authors propose Proximal Feature Optimization (PFO), a new regularization on the policy representation that regularizes preactivation changes, which can mitigate performance collapse and improve the agent’s performance.
Strengths: - The work exhibits a coherent structure, and the authors clearly elucidate the underlying motivation as well as the shortcomings of PPO, offering sufficient justification. This paper is well-written.
- Section 3's Q1 and Q2 effectively describe the potential problems of PPO, and I concur with the underlying motivation.
- The paper not only identifies problems but also suggests practical interventions, such as PFO and sharing the actor-critic trunk, to regularize representations and reduce non-stationarity, showing promising improvements in performance.
- The authors provide details of the experiment design, and it's clear.
Weaknesses: The authors claim to open data and code; however, I could not locate them. Therefore, I apologize if I overlooked their presence. Refer to the question section for other weaknesses.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Please provide a more detailed explanation on Eq.2. I don't understand the meaning of $\phi$, and should the formula be $s_t$ instead of $S_t$?
- In section 4, the authors introduce an intervention that allows the actor and critic to share all the layers. The experiment results illustrate that this can bring some improvements but not significantly. I am more curious about whether the reason for this is the sparsity of reward or the environment itself. I think the latter is more likely, and we can verify this by setting up a control experiment with different reward functions in the same environment.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1.** The authors claim to open data and code; however, I could not locate them. Therefore, I apologize if I overlooked their presence.
**A1.** **Yes, these are available on an anonymous GitHub repo mentioned in Appendix line 578.** The GitHub repo contains further links to the Weights&Biases project with all the interactive training curves and links to download the raw logs.
>**Q2.** Please provide a more detailed explanation on Eq.2. I don't understand the meaning of phi, and should the formula be s_t instead of S_t?
**A2.** $\phi(S_t)$ refers to the vector of the preactivations of the penultimate layer of the network on the state $S_t$. As PFO can also take all the pre-activations in the network, $\phi(S_t)$ could also be the concatenation of the preactivations of all the layers.
We will add these details before the equation in the camera-ready version.
(This is analogous to $\Phi$ defined in the background (lines 100-104) as the feature matrix, consisting of the activations of the last hidden layer.)
$S_t$ refers to the state that we capitalize as it is a random variable in the expectation over trajectories, like all other random variables in the background (lines 58 - 66) This is the notation from Sutton and Barto (2018).
Therefore, the equation is essentially the squared L2 norm between the features of the same state under the old policy and the learned one: $\| \phi_\theta(S_t) - \phi_\text{old}(S_t) \|_2^2$.
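To make this term concrete, here is a minimal NumPy sketch of the squared-norm penalty averaged over a batch of states; the function name and batching convention are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pfo_penalty(phi_new, phi_old):
    """Mean over the batch of the squared L2 norm between the
    preactivation features of the current and old policy on the
    same states: || phi_theta(S_t) - phi_old(S_t) ||_2^2.
    phi_new, phi_old: arrays of shape (batch, feature_dim)."""
    diff = phi_new - phi_old
    return float(np.mean(np.sum(diff ** 2, axis=-1)))
```

In training, such a term would be added to PPO's clipped surrogate loss with a weighting coefficient, with the old-policy features treated as constants (no gradient flows through them).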
>**Q3.** In section 4, the authors introduce an intervention that allows the actor and critic to share all the layers. The experiment results illustrate that this can bring some improvements but not significantly. I am more curious about whether the reason for this is the sparsity of reward or the environment itself. I think the latter is more likely, and we can verify this by setting up a control experiment with different reward functions in the same environment.
**A3.** This is a good observation! The representation of the critic plays a key role in the reasoning. When the critic, trained separately, is subject to a degraded representation (or collapse) with a very low rank and many dead units, we observed that sharing its representation with the policy makes the policy collapse faster.
**When the environments are similar** – we consider two ALE environments, e.g., Phoenix and Gravitar, to be similar because of the observation space, etc – **the main difference in making a critic collapse or not is sparsity of the reward** as observed by Lyle et al. (2022). **Therefore in this case, it is fair to say that the reason for the collapse is mainly due to reward sparsity.**
Nevertheless, one needs to be careful when comparing environments when more than one degree of freedom changes; for example, this would make comparisons of experimental results obtained on Mujoco and ALE environments harder to attribute to sparsity only.
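One way to isolate reward sparsity from other environment differences is a simple reward-masking intervention; a minimal sketch (the class name, seeding, and callable interface are illustrative assumptions):

```python
import random

class RewardMask:
    """Zero out each step's reward with probability p_mask,
    turning a dense-reward environment into an artificially
    sparse one while leaving everything else unchanged."""
    def __init__(self, p_mask=0.9, seed=0):
        self.p_mask = p_mask
        self.rng = random.Random(seed)

    def __call__(self, reward):
        return 0.0 if self.rng.random() < self.p_mask else reward
```

Wrapping an environment's per-step reward this way keeps the observation space and dynamics fixed, so any change in collapse behavior is attributable to sparsity alone.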
**Per the reviewer's request we ran an experiment on ALE/Phoenix (a dense reward environment), with a reward mask** randomly masking a reward with 90% chance, and compared the effects of sharing the actor-critic trunk. The results are in the additional PDF of the general rebuttal. **As expected, while with dense rewards, sharing the trunk was beneficial in ALE/Phoenix (Figure 21 Appendix), with the sparse reward, the opposite is true: sharing the trunk is detrimental (Figure 37 Rebuttal).** This confirms our conclusion. We thank the reviewer for suggesting the experiment, we will add it to the Appendix of the paper. We hope this strengthens the reviewer's support for the paper. | Summary: This work provides an empirical study of the feature rank deterioration and loss of plasticity of the Proximal Policy Optimization (PPO) algorithm on Atari and Mujoco tasks. Then links the deterioration of the performance to representation collapse and hence the break of the trust region. From there, the authors propose an auxiliary loss to PPO to maintain the representation rank.
Strengths: The study is very interesting and brings some new insight into how PPO works in practice. The experiments are conducted in clear logic and the analysis and observations are all novel and interesting.
Weaknesses: Figures can be plotted with better quality, don't overlap the labels (in Figure 3), and maybe set the titles to the quantity of interest.
I understand that several works are measuring the feature rank at the last policy/critic layer. But why that is sufficient evidence that the policy is losing rank? For instance, if my action space is binary, then the policy's last layer could learn to be rank-2. More information might be stored in previous layers. From another perspective, isn't it expected that the policy learns to lose rank, by compressing state information to only correlate with actions?
The proposed Proximal Feature Optimization objective is a bit confusing. First, should it be an $\ell_2$ norm? What is meant by $(\phi - \phi_\text{old})^2$? Secondly, from a high level, isn't this just a regularization loss as in continual learning that prevents forgetting? But based on my knowledge it will exacerbate the loss of plasticity. If you want to preserve rank, should you not add some reconstruction loss?
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors address my questions in the weakness section?
From Figure 6, does it mean that the proposed method does not consistently improve the loss of plasticity?
If the main goal is to prevent the preactivation norm from going larger, why not just regularize it to be smaller?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1.a** Figures can be plotted with better quality, don't overlap the labels (in Figure 3),
**A1.a** Indeed. We will correct this in the camera-ready version.
>**Q1.b** and maybe set the titles to the quantity of interest.
**A1b.** Can the reviewer please elaborate on this or give an example? What is meant by title and quantity of interest?
>**Q2.a** isn't it expected that the policy learns to lose rank, by compressing state information to only correlate with actions?
**A2.a** **Indeed, however, we distinguish between a low rank that’s beneficial and an extremely low rank that is detrimental. We have clarified this in answer GA1 in the global rebuttal.**
>**Q2.b** I understand that several works are measuring the feature rank at the last policy/critic layer. But why that is sufficient evidence that the policy is losing rank? For instance, if my action space is binary, then the policy's last layer could learn to be rank-2. More information might be stored in previous layers.
**A2.b** **The penultimate layer serves as a bottleneck** for decoding the actions or the value, so when this layer reaches an extremely low rank (as described in the previous answer, to distinguish for a beneficial low rank), this becomes irrecoverable. This also strongly correlates with plasticity loss, as this layer serves as a bottleneck for reconstructing information from the input state.
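For readers unfamiliar with the metric, an effective-rank estimator in the style commonly used in the plasticity literature can be sketched as follows; the threshold delta and the paper's exact estimator may differ:

```python
import numpy as np

def effective_rank(features, delta=0.01):
    """Smallest k such that the top-k singular values of the feature
    matrix (batch x feature_dim) capture a (1 - delta) fraction of
    the total singular-value mass. An extremely low value (single
    digits on Atari) signals the collapsed regime discussed above."""
    s = np.linalg.svd(features, compute_uv=False)
    cumulative = np.cumsum(s) / np.sum(s)
    return int(np.searchsorted(cumulative, 1.0 - delta) + 1)
```

Applied to the penultimate-layer features, this captures the bottleneck effect: once the estimate collapses to near the action-space dimension or below, little information can pass to the output head.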
>**Q3.a** The proposed Proximal Feature Optimization objective is a bit confusing. First it should be a l2 norm? What does it mean by (phi(s) - phi_old(s))^2.
**A3.a** Yes, we thank the reviewer for pointing this out. $\phi(s)$ is a vector containing the pre-activations.
The square of the difference should be more clearly written as the squared norm of the difference between the vectors: $\| \phi_\theta(S_t) - \phi_\text{old}(S_t) \|_2^2$.
>**Q3.b** Secondly, from a high-level, isn't this just a regularization loss as in continual learning that prevents forgetting? But based on my knowledge it will exacerbate the the loss of plasticity. If you want to preserve rank, should not you add some reconstruction loss?
**A3.b** **PFO regularizes the change in features at every batch, unlike methods that prevent forgetting, which regularize the parameters towards a fixed optimum of a previous task.**
In this sense, PFO is sought to be an extension of the clipping trust region applied in the output space to a trust region in the feature space. It comes from our observation that the clipping is not enough to satisfy the trust region.
It also allows us to address the undesired symptoms we observed (high feature norm and aliased features).
Other auxiliary losses, such as a reconstruction loss, can also be beneficial in this case but do not necessarily constrain or regularize the step size.
Finally, to support our claim that regularizing representations would prevent trust region and performance collapse, we found that using a regularization that directly targets one of the undesired symptoms we observed would make the connection more straightforward.
>**Q4.** From Figure 6, does it mean that the proposed method does not consistently improve the loss of plasticity?
**A4.** **It does consistently improve the plasticity loss.** This cannot show in the figure when the non-regularized model collapses too early. This is described in the caption of Figure 18: "the tails of the plasticity loss on Phoenix with interventions can be higher than without interventions on the runs where the models collapse too early without interventions, leading to the plasticity loss of the non-collapsed models with interventions eventually becoming higher."
>**Q5.** If the main goal is to prevent the preactivation norm from going larger, why not just regularize it to be smaller?
**A5.** The increasing preactivation norm phenomenon can be seen as an effect of a bigger issue: the failure of the clipping trust region imposed on the output to ensure a trust region on the rest of the network. Therefore, as mentioned in A3.b, PFO is sought to extend this trust region. Analogously to constraining ratios between the previous and current policy, it regularizes the difference in features between the previous and current policy.
**This allows PFO to address both the feature norm and the trust region issue. This is described in lines (319-322), which we will rephrase to highlight this point more.**
---
Rebuttal Comment 1.1:
Title: Reply
Comment: I thank the authors for their rebuttal and clarification. By "setting the title to the quantity of interest" I just meant that you could directly set an informative title for the figures. I will maintain my scores.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their clarification. We will add this to our final version. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough and insightful reviews. We are glad that the reviewers appreciate the novelty and impact of assessing loss of plasticity in on-policy optimization and its connection to PPO’s trust region and acknowledge our thorough experimental setup.
We have **addressed all the concerns and questions of the reviewers** in the individual rebuttals.
We hope that this helps the reviewers **increase their support for the paper; otherwise, we kindly ask the reviewers to point out any remaining issues.**
In addition, we use this section of the rebuttal to group and address **some common questions and concerns raised by reviewers. We reference those in our individual answers to the reviewers who raised them.**
**GQ1. The distinction between low-rank and poor representations**
>**YzGO Q3.** I would be careful about mentioning that low feature rank necessarily means that the representation is poor. Successful training of deep neural networks often involves a reduction in feature rank and the relationship between rank and performance can be complex Gulcehre et al. (2022).
**XWGd Q2a.** isn't it expected that the policy learns to lose rank, by compressing state information to only correlate with actions?
**GA1.** Indeed, we reference the work of Gulcehre et al. (2022) in our work and highlight in lines (236-238) that the relation we draw between the representation dynamics, the trust region, and the performance **primarily holds around the poor representation regime which we characterize by an extremely low rank, and not necessarily throughout training.** (lines 236-238: “We observe no significant correlation in the regions where the representation is rich … but an apparent decrease of the average probability ratios below $1 − \epsilon$ is observed as the representation reaches poor values”).
It may **not be straightforward to draw a line between low-rank representations beneficial for generalization and extremely low-rank representations causing aliasing**, as also acknowledged by Gulcehre et al. (2022) at least in offline value-based RL (“Unfortunately, reasoning about what it means for the rank to be too low is hard in general”), but for environments like Atari, **our figures seem to draw the line at single-digit ranks, which can be related to the action space of dimension 8+.**
We will further clarify this distinction in the same paragraph (lines 236-238) and in the captions.
**GQ2. The scope of the connection between representations and the trust region, and the significance of this scope.**
>**YzGO Q3.** I am also concerned about the link between feature rank and trust-region violation, a central point of the paper. In Fig.4, the correlation between the prob ratios and dead neurons or prob ratios and feature rank seem weak. There is only a substantial dip in the prob ratios once the feature rank is near zero or the number of dead units is very high. In other words, only when the agent is doing very poorly can we see a strong relationship, which may limit the applicability of the finding. Ideally, one would avoid this regime since performance is poor to start with.
> **Qwmj Q2** ... This cripples the significance of PFO and also the potential causal connection …
**GA2.** As mentioned in the previous answer we claim that the causal connection we draw between the representation dynamics, the trust region, and the performance primarily holds around the collapse regime and not necessarily throughout training.
**Yet discovering and describing such a relation only around collapse is nontrivial.** First, this gives **evidence that this regime is often attained and should be avoided.** Second, it **gives important insights into the failure mode of the popular PPO algorithm**, whose trust region is highly dependent on the representation quality, and **more generally about current trust-region methods, which only constrain the output probabilities.**
The discovery of this link can further drive research on training deep networks in non-stationary settings and influence the design of future trust-region methods (e.g., PFO forms a trust region in the representation space as well).
Pdf: /pdf/4dfa08d8fa88e1f6d28bb29792afbbacd2e67e94.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper investigates the phenomenon of loss of trainability and performance collapse with PPO agents in standard RL benchmarks. The same phenomena from other settings are found to hold with PPO agents and, moreover, increasing the number of training epochs exacerbates the effect.
Further investigation finds that PPO's trust-region constraint is not maintained when the feature rank decreases and a mechanism for this finding is explained.
The authors propose a regularizer, PFO, which softly constrains the change in features and is shown to mitigate trainability loss throughout the learning process.
Strengths: The paper investigates the topic of trainability loss in a less-explored context, on-policy policy optimization, and focusing on a popular algorithm, PPO. Better understanding this algorithm and how it interacts with neural networks could lead to widespread impact as the research direction is developed.
A wide variety of experiments are conducted with some interesting findings. To me, the most intriguing insight is the connection between low feature ranks and increased violations of PPO's trust region. The investigation is fairly thorough, examining various metrics in tandem throughout the training process. Experiments seem to be conducted and reported in a reasonable fashion.
I appreciate the simple, clear example to demonstrate how a poor representation may lead to the clipping constraint being ineffective (line 270-...).
Weaknesses: Currently, the paper feels a bit disorganized and it is a bit difficult to follow the train of thought. There are many different experiments done and placing more focus on the most important could help streamline the paper. For example, the section at line 193 feels a bit out of place since this line of investigation is not mentioned earlier. Another example is that the research questions Q1. and Q2. do not mention the trust-region although it is a large part of the paper.
Additionally, various phrases (or claims) in the paper could use more support or evidence. These are discussed more specifically in the Questions section.
I am also concerned about the link between feature rank and trust-region violation, a central point of the paper.
In Fig.4, the correlation between the prob ratios and dead neurons or prob ratios and feature rank seem weak. There is only a substantial dip in the prob ratios once the feature rank is near zero or the number of dead units is very high. In other words, only when the agent is doing very poorly can we see a strong relationship, which may limit the applicability of the finding. Ideally, one would avoid this regime since performance is poor to start with.
Technical Quality: 2
Clarity: 3
Questions for Authors: I would be willing to revise my score based on answers to the following.
_Clarification questions_
- Line 308. Could you clarify how the average probability ratio is computed?
As written, my interpretation is that, given some window of updates, you consider all ratios above $1+\epsilon$ and take their mean. Then, you do the same for the ratios less than $1-\epsilon$. Finally you divide the first mean ratio by the second mean ratio. Is this correct?
If so, this seems potentially misleading because the means are computed conditionally, considering only ratios that are above (or below) the clipping thresholds. Then, we cannot tell what the mean ratio is overall, and whether there are many violations of the trust region occurring. Moreover, ratios can be misleading since they may overemphasize small probabilities in the denominator.
I would suggest also reporting how often the clipping criteria is violated. E.g. compute on collected states which proportion of them have a ratio outside the clipping region after a round of updates on a batch of trajectories.
Perhaps it would also be useful to consider ratios less than 1 and greater than 1 separately instead of combining them into a single ratio.
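For concreteness, a small numpy sketch of the diagnostic I am suggesting (illustrative names; not the paper's metric): the fraction of collected states whose probability ratio lands outside the PPO clipping region.

```python
import numpy as np

def clip_violation_fraction(ratios, eps=0.2):
    """Fraction of states whose ratio pi_new/pi_old lies outside [1-eps, 1+eps]."""
    ratios = np.asarray(ratios)
    outside = (ratios > 1 + eps) | (ratios < 1 - eps)
    return outside.mean()

# Hypothetical ratios measured after a round of updates on a batch of trajectories.
ratios = np.array([0.95, 1.05, 1.3, 0.7, 1.0, 1.25])
print(clip_violation_fraction(ratios))  # 3 of 6 ratios outside -> 0.5
```

Unlike a conditional mean, this count is not skewed by the magnitude of the violating ratios.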
- Line 164. Using the term "plasticity loss" as a metric is confusing because it also refers to the overall phenomenon of neural networks being unable to learn additional tasks after training on other tasks.
Does this term refer to the capacity to fit random labels? In that case, I would use a more descriptive term, e.g., random label trainability (RLT).
Also, how are these random labels generated in this context? In the original paper, a regression task was made since a value-based algorithm was used. With a policy-based algorithm, how is the plasticity evaluation task set up?
- Fig.1 Why does the preactivation norm plateau at 10^4?
- Which data is used to produce the box plots? The caption for Fig. 6 mentions "A boxplot includes 15 runs with the different epochs". Does this mean the box plot contains data across training times and epochs?
- In the box plots, how are outliers determined? We can see they lie outside the interquartile range indicated by the "whiskers" but what additional criteria is there?
- How is the PFO regularizer related to adding a regularizer in parameter space? Say $\ell_2$ regularization between the current parameters and the previous parameters? Basically, this seems similar in spirit to adding a target network to value-based methods.
- Line 94-99 (and elsewhere) "PPO introduces additional non-stationarity...": I do not think it would be fair to say PPO has more nonstationarity than other policy gradient methods. REINFORCE and PPO both have nonstationarity due to the changing policies. REINFORCE can also be interpreted as a trust-region-like method. Since REINFORCE is essentially gradient ascent on the expected return objective, we can view it as minimizing a linear approximation with a quadratic constraint (see the paragraph containing eq. 5.5 in [4]). In this light, by doing a single optimization step before changing the surrogate objective, we could say REINFORCE has _more_ nonstationarity.
- Line 257. "The underlying assumption...approximately orthogonal..." Could you give some more support to this claim?
As mentioned above, with state aggregation, this clipping strategy could be effective without having orthogonal features.
- Line 8-9: "...policy optimization methods which are often thought capable of training indefinitely". I would disagree with this claim. Deep RL algorithms have often been thought to be high-variance and unstable, so I do not think it is common to believe they can train indefinitely.
I suggest replacing this phrase and simply mentioning that the phenomenon has been underexplored in the context of on-policy policy optimization.
- The parameter norm has been linked to plasticity/trainability loss in neural networks. Is there a reason why the paper measures feature norms instead?
_Broader questions_
- The performance degradation from training more epochs could be attributed to difficulties of offline RL. As the number of epochs is increased, the more the procedure resembles offline RL, a setting for which standard RL algorithms are not suitable. What do you think of this?
- I find the proposed solution (Proximal Feature Optimization) to be slightly unsatisfying. Generally, I can see how regularizing the policy and features can mitigate the performance collapse, but would this simply delay the problem? If we train for longer, would we still expect to see the same performance degradation?
- Interestingly, the value network is not the problem. It is the policy network that suffers from representation collapse. Do you have any thoughts about this? I think this could be an interesting avenue of investigation.
_Suggestions_
- I would suggest removing or summarizing the paragraph at line 193 into a single sentence.
It is clear that the policy would be the same for all states if the features are constant. This would imply the policy is uniform. Currently the paragraph seems to explain this in a roundabout manner when a concise explanation suffices.
- A suggestion would be to try the Atari environments identified in [1] as benefiting from plasticity injection. On many Atari games, little or no benefit was found, so it may be more meaningful to focus on those where there is more evidence of trainability loss.
- I would be careful about mentioning that low feature rank necessarily means that the representation is poor. Successful training of deep neural networks often involves a reduction in feature rank and the relationship between rank and performance can be complex [3].
- As a sidenote: even if there is a low-dimensional representation, this does not necessarily imply the clipping criterion will be violated. For example, if we use state aggregation, then multiple states are clustered into one aggregate state. Then, the policy will be exactly the same for all these states, so if the policy exits the clipping region for one of those states, there will no longer be any incentive to update the policy further on any of them.
- A minor suggestion is to use stable rank [2] as a measure of the rank instead of effective rank since it is a continuous quantity in the singular values.
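For reference, a quick numpy sketch contrasting the two rank measures (my formulas; the paper may use different variants): stable rank is a smooth function of the singular values, while the threshold-based effective rank jumps discretely.

```python
import numpy as np

def stable_rank(A):
    """Stable rank [2]: ||A||_F^2 / ||A||_2^2, continuous in the singular values."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    return (s ** 2).sum() / s[0] ** 2

def effective_rank(A, delta=0.01):
    """A common threshold-based variant: smallest k whose top-k singular
    values capture a (1 - delta) fraction of the total spectrum mass."""
    s = np.linalg.svd(A, compute_uv=False)
    cum = np.cumsum(s) / s.sum()
    return int(np.searchsorted(cum, 1 - delta) + 1)

A = np.diag([10.0, 1.0, 0.1, 0.01])
print(stable_rank(A))     # ~1.01: dominated by the top singular value
print(effective_rank(A))  # 2: the threshold counts a second direction
```

The example shows how the discrete measure can change by a whole unit while the continuous one barely moves, which is why I suggest the latter.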
[1] "Deep Reinforcement Learning with Plasticity Injection" Nikishin et al.
[2] https://nickhar.wordpress.com/2012/02/29/lecture-15-low-rank-approximation-of-matrices/
[3] "An Empirical Study of Implicit Regularization in Deep Offline RL" Gulcehre et al.
[4] https://www.stat.cmu.edu/~ryantibs/convexopt-S15/scribes/05-grad-descent-scribed.pdf
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: These are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address the key points in the rebuttal and the remaining ones in a comment.
>**Q1.** line 193 feels a bit out of place … investigation is not mentioned earlier … suggest removing or summarizing the paragraph …
**A1. This information helps practitioners diagnose the kind of collapse they face.** A common cause of collapse or stagnation is entropy collapse and a lack of exploration. In this work, we highlight that when collapse is associated with high entropy, it is likely due to collapsed representations. **We agree with the reviewer's suggestion and will summarize this paragraph in a single sentence.**
> **Q2.** … research questions Q1. and Q2. do not mention the trust-region although it is a large part of the paper.
**A2.** Q1 and Q2 serve as motivation for section 3.1, where they are answered. These are not the only questions addressed in the work. The trust-region exposition in section 3.2 is only motivated after answering Q1 and Q2 and observing collapse.
We will move Q1 and Q2 inside section 3.1 to clarify their scope and reformulate section 3.2 with two questions, Q3 and Q4, to mirror 3.1 and ensure the question format covers all the main points of our work.
>**Q3.** I would be careful about mentioning that low feature rank necessarily means that the representation is poor … I am also concerned about the link between feature rank and trust-region violation …
**A3.** Reviewer XWGd has also raised this concern. We have clarified this in GA1 and GA2 of our global rebuttal.
> **Q4.a** how the average probability ratio is computed? …
**A4.a** Yes, this is correct, and more details can be found in the Appendix lines 635-640.
> **Q4.b** If so, this seems potentially misleading …
**A4.b** The mean ratio without conditioning does not give information about learning or the trust region. The PPO-Clip objective increases the ratios of actions with positive advantages and decreases those with negative advantages until they reach the clip limit. Therefore, to have a signal for learning, we have to at least condition on the sign of the advantage. Still, a takeaway from the toy example is that taking the mean ratio of actions with, say, a positive advantage under bad representations would mix the ratios that suffer from interference (Figure 5 right) and would give a misleading average.
Therefore, we found that the right way to quantify the trust-region violation is to condition on the violation itself (e.g., above $1+\epsilon$). We took the mean rather than the count because fully optimizing the PPO objective should push the actions until the clip limit, so it's not clear whether a high clip count would be worse than a lower one.
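To make our computation concrete, here is a minimal numpy sketch of the conditional mean ratio as described in A4.a (an illustrative rendering; the exact implementation is in our code and Appendix lines 635-640):

```python
import numpy as np

def excess_ratio(ratios, eps=0.2):
    """Mean of ratios above 1+eps divided by mean of ratios below 1-eps,
    conditioning on the trust-region violation itself."""
    ratios = np.asarray(ratios)
    upper = ratios[ratios > 1 + eps]
    lower = ratios[ratios < 1 - eps]
    if len(upper) == 0 or len(lower) == 0:
        return np.nan  # no violation on one side of the clip region
    return upper.mean() / lower.mean()

# Illustrative ratios: two violations above, two below, two inside the region.
ratios = np.array([1.5, 1.3, 0.9, 1.1, 0.6, 0.4])
print(excess_ratio(ratios))  # mean(1.5, 1.3) / mean(0.6, 0.4) = 1.4 / 0.5 = 2.8
```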
>**Q4.c** they may overemphasize small probabilities in the denominator.
**A4.c** This is true but has not been a critical issue for our analysis. In Figure 4 with ALE the smallest probability ratio is around 0.4, and in Figure 6, the largest excess ratios are between 1.5 and 2.5.
>**Q4.d** I would suggest also reporting how often the clipping criteria are violated …
**A4.d** Yes, we track this. As discussed above (A4.b), it is not clear what the clip fraction should be under full optimization of the PPO objective, so it is not an informative quantity for our study.
>**Q4.e** consider ratios less than 1 or greater than 1 separately
**A4.e** We do separate them in Figure 4, where we look at the ratios below $1-\epsilon$ and isolate the causal relation around poor representations. For Figure 6, aggregating them into a ratio provides a better summary with larger plots.
>**Q8.** How is the PFO regularizer related to adding a regularizer in parameter space? …
**A8.** We can construct a spectrum ranging from regularizing the parameters to the outputs. The lower end regularizes the parameters like an L2 weight difference. The higher end regularizes the network's output like PPO clipping and almost like a target network in value-based methods. **PFO sits in the middle of this spectrum, where the regularization allows both the network's weights and final outputs to change without explicit constraints while maintaining a regularization of the feature space.**
Also, PFO should mostly be seen combined with PPO, to extend the trust region to the feature space.
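As a minimal sketch of where PFO sits on this spectrum (an assumed form for illustration; the paper defines the exact penalty), the regularizer acts on the actor's feature activations for the same states under two consecutive policies, rather than on the parameters or the final outputs:

```python
import numpy as np

def pfo_penalty(phi_new, phi_old, coef=0.1):
    """Assumed feature-space penalty: squared change of the actor's features
    between consecutive policies, added to the PPO-Clip loss.
    `coef` is a hypothetical regularization coefficient."""
    return coef * np.mean((phi_new - phi_old) ** 2)

# total_loss = ppo_clip_loss + pfo_penalty(phi_new, phi_old)
phi_old = np.ones((4, 8))        # features of a state batch under the old policy
phi_new = np.ones((4, 8)) * 1.5  # same states after an update
print(pfo_penalty(phi_new, phi_old))  # 0.1 * 0.5^2 = 0.025
```

In contrast, an $\ell_2$ parameter penalty would compare weights, and PPO clipping compares outputs; the feature penalty leaves both ends unconstrained.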
>**Q12.** Is there a reason why the paper measures feature norms instead?
**A12.** Other work we cite, like Lyle et al. (2024), has also looked at feature norms in addition to weight norms. We noticed that the model weights were consistently and steadily increasing regardless of collapse; however, the feature norm showed a sudden jump around collapse. **We found this to be a more apparent symptom that researchers and practitioners would want to investigate further.**
>**Q13.** degradation from training more epochs could be attributed to difficulties of offline RL
**A13.** Off-policiness could be a good characterization here as the algorithm has access to the action probabilities of the collected data and the policy that collected it more generally.
In PPO the more epochs performed, the more the on-policy approximation becomes off-policy. However, the PPO-Clip objective is supposed to be robust to slight changes in the number of epochs, as it only depends on epsilon. Therefore, **we view observing a drastic collapse by moving from four to six epochs in Figure 1 not as a failure of standard RL algorithms in the off-policy setting but as a failure of the learning dynamics of PPO-Clip.**
> **Q14.** I find the proposed solution PFO to be slightly unsatisfying, …
**A14.** The first purpose of PFO in this work, as the reviewer mentions, is to show that explicitly regularizing the feature dynamics builds robustness to trust-region and performance collapse.
Now as with most empirical studies and regularizations using a coefficient, there is no guarantee on how long PFO will stand the test of time, but we hope that our paper encourages further theoretical studies and only then will we have provably mitigated the problem.
---
Rebuttal 2:
Title: Addressing the remaining points
Comment: >**Q5.** Using the term "plasticity loss" as a metric is confusing because it also refers to the overall phenomena …
**A5.** Yes, we use this metric to compute the loss from fitting a random initialization's trajectories. It is defined in the background (114-122) and to fit the actor we use a KL divergence.
In the paper, **we distinguish between the metric called plasticity loss and the phenomenon called loss of plasticity.**
This is the value of a loss, that’s why we call it plasticity loss. However, the phenomenon has also been termed plasticity loss by Lyle et al. (2024), so we agree that reusing it for the loss can be confusing.
We propose to state our distinction between the two forms in the background, making it less confusing. Otherwise, we can rename the loss term to random-fit error.
>**Q6.** Fig.1 Why does the preactivation norm plateau at 10^4?
**A6.** Typically, the preactivation norm increases until all the neurons of the feature layer are dead (in the case of ReLU). After that, all the gradients for the weight matrices are 0, so the norm flattens out.
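This mechanism can be illustrated with a small numpy sketch (illustrative shapes and values, not our training setup): once every ReLU preactivation in a layer is negative, the backward pass through the ReLU zeroes the gradient on the layer's weights.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))   # a batch of inputs
W = rng.normal(size=(16, 8))    # feature-layer weights
b = -100.0                      # bias shift making every unit dead here

z = x @ W + b                   # preactivations: all negative in this example
h = np.maximum(z, 0.0)          # ReLU output: all zeros (all units dead)

g = rng.normal(size=z.shape)    # some upstream gradient dL/dh
dz = g * (z > 0)                # ReLU backward pass: gated by z > 0
dW = x.T @ dz                   # gradient w.r.t. the weights
print(np.abs(dW).max())         # 0.0: no signal reaches the weights anymore
```

With zero gradients, the weights and therefore the preactivation norm stop changing, producing the plateau in Fig. 1.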
>**Q7.a** Which data is used to produce the box plots? …
**A7.a** Each run is summarized with an average over the last 5% of its progress. The 15 runs consist of 3 epoch values (4, 6, 8) x 5 seeds. This is presented in lines 307-308 and further detailed in the appendix, lines 641-644 and 658-659.
>**Q7.b** In the box plots, how are outliers determined? …
**A7.b** The whiskers extend to the highest observed data point below Q3 + 1.5 IQR (and similarly for the lower one), which is the matplotlib default. The outliers are the points outside the whiskers.
We thank the reviewer for the question and will add this information to Appendix lines 659+.
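For clarity, the whisker rule can be sketched as follows (a standalone illustration of the Tukey convention matplotlib uses by default, not our plotting code):

```python
import numpy as np

def tukey_whiskers(data):
    """Whisker = most extreme data point within 1.5 * IQR of the quartile;
    anything beyond the whiskers is drawn as an outlier."""
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    hi = data[data <= q3 + 1.5 * iqr].max()
    lo = data[data >= q1 - 1.5 * iqr].min()
    outliers = data[(data > hi) | (data < lo)]
    return lo, hi, outliers

data = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 10.0])
lo, hi, out = tukey_whiskers(data)
print(lo, hi, out)  # 10.0 lies beyond Q3 + 1.5*IQR, so it is an outlier
```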
>**Q9.** I do not think it would be fair to see PPO has more nonstationarity than other policy gradient methods…
**A9.** We agree that, from this perspective, REINFORCE could be considered as more nonstationary than PPO. **We will remove this "more nonstationary" observation; it is not critical to the work.**
The critical point of our work is that PPO performs multiple epochs on the same data and, in this sense, can be more prone to "overfitting" to previous experiences, which worsens with more epochs.
>**Q10.** Line 257. "The underlying assumption...approximately orthogonal..." Could you give some more support to this claim? …
**A10.** The presumed orthogonality here is between the states where the policy should act differently, like with two different groups of aggregated states. Otherwise, for the states where the policy should be similar (same group of aggregated states), some alignment in the features and a reduction of dimensionality are indeed desired for generalization. However, with clipping, this alignment should be associated with the right relative feature norm; otherwise, the clipping will be violated, as observed in Figure 5, where a larger alpha would make the relative deviation of the action probabilities larger.
>**Q11.** Line 8-9: "...policy optimization methods which are often thought capable of training indefinitely"...
**A11.** This is indeed arguable. We believe that trust region methods aim to overcome this high variance and instability.
However, this is not a critical claim, and we can remove it. We thank the reviewer for the suggestion.
>**Q15.** Interestingly, the value network is not the problem. …
**A15.** **Yes, this is one key point of our work.** Policy networks also suffer from representation collapse independently of value networks, as noted in the caption of Figure 1.
Our intuition and the main motivation of this work is that, similarly to value networks, policy networks are also subject to the non-stationarity of their inputs and outputs (background, lines 88+), and optimizing deep neural networks under non-stationarity is known to cause issues.
>**Q16.** A suggestion would be to try the Atari environemnts identified in [1] to benefit from plasticity injection…
**A16.** Phoenix is one of the environments we have in common and where we see the most impactful results.
For this work, we did not want to select environments with prior knowledge of existing collapse in value-based methods to demonstrate the collapse in policy optimization; we rather used the unbiased sample recommended in Atari-5 (Aitchison et al. 2023) because **we did not want to unconsciously cherry-pick environments where certain approaches may perform better.** Moreover, we would like to highlight that, on Atari, we could not find any large-scale study of on-policy RL algorithms and PPO demonstrating the collapse of the actor's representation from which we could pick our environments at the time of submission.
>**Q17.** A minor suggestion is to use stable rank [2] as a measure of the rank …
**A17.** **We have tracked 5 different measures of the rank, spanning continuous or discrete and relative or absolute metrics.** We conducted a thorough comparison of these in Appendix E.
---
Rebuttal Comment 2.1:
Comment: I appreciate the clarifications and explanations.
After reading responses to other reviewers, I have to agree that certain points of Reviewer qwmj are still concerning, including the ones around the performance of the baseline. I am also still a bit underwhelmed by the fact that trust-region failures seems to only occur when the representation has already collapsed and the performance is poor.
I will keep my score at present.
---
Rebuttal 3:
Comment: We are happy that our clarifications addressed all your previous concerns.
> I have to agree that certain points of Reviewer qwmj are still concerning, including the ones around the performance of the baseline.
**We understand the concern; however, this is a matter of clarification.** We take the thoroughness of our implementation with utmost importance and **have clarified all the necessary bits in our reply to the reviewer,** which we summarize below:
1. Recall that **our results hold on Atari and we have already replicated them with the CleanRL** implementation the reviewer is using.
2. Recall that **our implementation for MuJoCo is based on recent implementations** for continuous action spaces influenced by seminal work studying implementation details in PPO and that **we find this setting more relevant to our audience.**
3. **Replicate our implementation in CleanRL, which is still exhibiting collapse,** so the reviewer can more easily inspect the code.
4. **Adapt the default CleanRL implementation the reviewer is using with a fully state-dependent action distribution and observe collapse** in the setting they are familiar with and interested in.
> I am also still a bit underwhelmed by the fact that trust-region failures seem to only occur when the representation has already collapsed and the performance is poor.
**Unfortunately, this is a fact.** We extensively investigated the link in the early stages of the project to find strong evidence throughout training, **which to us was also the more exciting claim to prove; however, countless examples made us realize that the link did not necessarily hold throughout training but only around the collapsing regime.**
Using only positive examples or a narrow view of the correlations, to **claim that such a link is present throughout training would be extremely misleading and has been an issue in offline RL.**
The work by Gulcehre et al. (2022) that the reviewer references is an excellent example of this. They conclude that previous exciting associations between performance and rank in offline RL are misleading and do not hold when considering a large experimental scope.
Our conclusion is that trust-region violations become more evident when representations are about to collapse. **This does not mean that one should only be concerned about the link when performance starts to deteriorate.** **The representations do not collapse all of a sudden; they deteriorate throughout training until they reach collapse. So, mitigating representation degradation should happen throughout training, not only around the collapsing regime.**
We hope these clarifications address your concerns. Thank you for your input, and we look forward to your feedback. | Summary: This paper presents a series of experimental studies to diagnose the learning issues of PPO under non-stationarity in Atari-5 and MuJoCo tasks. Based on the results, this paper establishes a connection among feature rank/norm, plasticity loss and trust region violation and learning performance. To mitigate the issues in feature representation, this paper proposes Proximal Feature Optimization (PFO) and demonstrates its effectiveness in mitigating the learning issues under non-stationarity. Besides, the effects of sharing network parameters and adapting Adam are also evaluated in the same context.
Strengths: - Most existing works on plasticity loss or learning under non-stationarity focus on value-based RL or off-policy AC methods. In contrast, this paper focuses on on-policy algorithms, mainly PPO, which helps the RL community gain a better understanding of this problem.
- The experimental studies contains the results with multiple metrics, i.e., feature rank, feature norm, plasticity loss, dead neuron, excess ratio. These results will be a useful reference to the audiences.
- This paper reveals the connection among factors like feature rank, plasticity loss, excess rate, and learning performance.
Weaknesses: - The writing is almost clear. The empirical results and conclusions can be better organized and made prominent.
- The proposed method PFO is closely related to DR3 (Kumar et. al, 2022) and RD [1], both of which propose regularizing the feature representation. More discussions are necessary.
- Although PFO are demonstrated to mitigate the feature rank, plasticity loss, excess ratio in Section 4, it does not bring clear improvement in terms of learning performance like episode return (according to Figure 18 to Figure 27). This cripples the significance of PFO and also the potential causal connection between the learning issues investigated and the learning performance.
---
Reference:
[1] Reining Generalization in Offline Reinforcement Learning via Representation Distinction. NeurIPS 2023
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The episode return curves in Figure 24 and Figure 26 (i.e., for MuJoCo Hopper) look strange. Is there any explanation for the collapse? A regular implementation of PPO should work in Hopper when the epoch number is 10.
2. The authors mentioned 4 MuJoCo tasks and 5 Atari games in the Experimental Setup paragraph. However, it seems only 3 Atari games (Phoenix, NameThisGame, Gravitar) and 2 MuJoCo tasks (i.e., Humanoid, Hopper) are used for the evaluation in Section 4. Are there more results on the remaining tasks/games?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are discussed in Section 5. However, one major limitation of this work is that the results in this paper do not provide sufficient support for the point that "mitigating the learning issues in feature rank, plasticity loss of PPO can improve the learning performance in terms of episode return".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1.** … PFO is closely related to DR3 and RD …. More discussions are necessary.
**A1.** We thank the reviewer for pointing these out. Although **these regularizations emerge from value-based offline-RL challenges**, a discussion of their similarities with PFO can be valuable to our audience. We can add the following to our camera-ready version.
Other interventions regularizing feature representations have been studied in value-based offline RL. Kumar et al. (2022) propose DR3, which counteracts an implicit regularization in TD learning by minimizing the dot product between the features of the estimated and target states.
Ma et al. (2024) propose Representation Distinction (RD), which tries to avoid unwanted generalization by minimizing the dot product between the features of state-action pairs sampled from the learned policy and those sampled from the dataset or an OOD policy.
**Both are related to PFO as the methods directly tackle an undesired feature learning dynamic, but there is no motivation for DR3 or RD in online RL, and PFO is conceptually different.**
The implicit regularization that DR3 counteracts is not present in on-policy RL, as shown by Kumar et al. (2022) in the SARSA experiment, and PFO differs from DR3 in that it extends a trust region rather than counteracting an implicit bias. Hence the different implementations: PFO regularizes the state features between two consecutive policies, as opposed to the features of consecutive states under the same policy in DR3.
Similarly, the overestimation studied by Ma et al. (2024) in the vicious backup-generalization cycle is broken by on-policy data. PFO’s motivation to bring the trust region to the feature space resembles RD’s motivation to bring the overestimation constraint to the feature space. However, they are again different in their implementation as RD regularizes state features between the learned policy and the dataset policy or an OOD policy.
**Finally, PFO is applied to the actor, while both DR3 and RD are applied to the critic.**
>**Q2.** … do not provide sufficient support for the point that "mitigating the learning issues in feature rank, plasticity loss of PPO can improve the learning performance in terms of episode return".
>PFO … does not bring clear improvement in terms of learning performance like episode return ... This cripples the significance of PFO and also the potential causal connection …
**A2.** First, we claim that **the causal connection holds around the collapse regime**, not necessarily throughout training, and that discovering and **describing such a relation only around collapse is nontrivial.** We have clarified these arguments in GA1 and GA2 in the global rebuttal.
Second, we distinguish between consistently improving the best run performance (i.e. claiming to be “better” than PPO) and improving the aggregate performance across multiple runs and hyperparameters, e.g., robustness to collapse in our case. The limitation the reviewer raises is about the former, however this work argues about the latter.
In this sense, the **significance of PFO is realized by a) successfully mitigating collapse** as observed in Figure 6 with a significantly higher median performance indicated by the black lines, and b) improving the representation metrics and obtaining a better trust region, therefore **strengthening the relation we’ve drawn** between representation, trust region, and performance around collapse.
Ultimately, **this improvement can translate into best-run performance gains as well, though less consistently, as it requires running for long enough** to observe a collapse with the standard (tuned) hyperparameters. This is the case for NameThisGame in Figures 18 and 22, Humanoid in Figures 19 and 25, and Hopper in Figures 20 and 26, where PFO performs better than the baseline PPO. This is not a central claim of the paper, as it requires a larger computational budget to show for all environments.
**To strengthen these points, we have added Figure 34 to the rebuttal PDF** where we show that on ALE/Phoenix with the tuned standard hyperparameters, the agent collapses when training for longer and this is mitigated with PFO, which attains the best performance in that case.
>**Q3.** The episode return curves … for MuJoCo Hopper look strange. Is there any explanation for the collapse? A regular implementation of PPO should work in Hopper when the epoch num is 10.
**A3.** We use the same hyperparameters as the original PPO implementation, which are also the default ones in popular codebases (as noted in lines 140-143).
Our setting differs from a “regular” implementation in the following main points, which can help explain why a collapse is not typically observed in previous work.
1. We train for longer and collapse always happens after the default 1M steps. Dohare et al. (2023) also observe collapse when training for longer.
2. We do not decay the learning rate (line 154) because we are interested in agents that can ideally train indefinitely.
3. We parameterize the action space with a TanhNormal distribution whose mean and variance are both state-dependent, following the implementation of Haarnoja et al. (2018) (lines 614-616).
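The third point can be sketched as follows (illustrative shapes and a simple linear head; our actual networks differ): both the mean and log-std are functions of the state features, so representation collapse directly affects the exploration noise, unlike the common state-independent log-std vector.

```python
import numpy as np

def tanh_normal_sample(features, W_mu, W_logstd, rng):
    """Sample squashed actions with state-dependent mean AND std
    (hypothetical linear heads on the actor's state features)."""
    mu = features @ W_mu
    log_std = np.clip(features @ W_logstd, -5.0, 2.0)  # clamp for stability
    eps = rng.normal(size=mu.shape)
    pre_tanh = mu + np.exp(log_std) * eps
    return np.tanh(pre_tanh)  # actions squashed into [-1, 1]

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 16))  # a batch of 4 state features
a = tanh_normal_sample(feats, rng.normal(size=(16, 6)), rng.normal(size=(16, 6)), rng)
print(a.shape, bool(np.all(np.abs(a) <= 1.0)))  # (4, 6) True
```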
>**Q4.** … 4 MuJoCo tasks and 5 Atari games in the Experimental Setup ... However … only 3 Atari games … and 2 MuJoCo tasks … used in Section 4. Are there more results on the remaining tasks/games?
**A4**. Indeed, as noted in lines 305-306, while we used 4 MuJoCo tasks and 5 Atari games to demonstrate the collapse phenomenon with enough evidence, we did not use all of the environments for the evaluation.
We selected the environments where the collapse was the most consistent, allowing us to test multiple interventions while maintaining a reasonable compute budget (this is common practice, as in Gulcehre et al. 2023 and Kumar et al. 2021). We believe this to be the right tradeoff between claiming that interventions are necessary to prevent collapse and providing enough insight about several types of interventions.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I appreciate the authors' careful response and the additional results.
Thanks for the implementation details provided by the authors. According to Figure 8 (PPO at MuJoCo without intervention) and Figure 24 (PPO at MuJoCo-Hopper with intervention), the results show that (1) the PPO implementation used in this work exhibits collapse in HalfCheetah and Hopper with epoch=10 basically after 1M steps (Figure 8), and the collapse remains when different kinds of intervention are applied.
I still have a concern about the validity of the PPO implementation used in this work. After a quick run with github CleanRL implementation of PPO, I did not observe collapse for HalfCheetah and Hopper after 1M steps.
With the details provided by the authors, I realize that it should stem from the difference in implementation. On my side, the learning rate gradually decays to 0.1 * [init_learning_rate], and the variance vector is state-independent (I think this is a convention). The authors mentioned "We do not decay the learning rate (line 154) we are interested in agents that can ideally train indefinitely", which does not quite make sense to me as "train indefinitely" is meaningless after convergence or collapse in a single-task RL training process. And a proper learning rate decay (e.g., decay to 0.1 * [init_learning_rate]) should also allow the agent to learn within a long horizon.
> More discussion about evaluation environment choice
The authors mentioned,
"We selected the environments where the collapse was the most consistent to test multiple interventions while maintaining a reasonable compute budget".
According to Figure 8, it looks like the collapse is more severe in Ant and HalfCheetah than in Humanoid. This turns out to be a bit contradictory with the response above that Humanoid is used for evaluation but Ant and HalfCheetah are not included.
> More discussion about the collapse
The results in Figure 24 show that all the interventions fail to address the collapse in Hopper. And based on my personal experience, PPO with proper implementation does not collapse in MuJoCo tasks (as mentioned above). My main concern remains after the rebuttal.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for promptly engaging with our rebuttal and clarifying their main concern.
**We take the thoroughness of our implementation with utmost importance** and understand the concern of the reviewer. **We have been preparing a complete answer with sufficient experimental evidence to address your concerns.**
We address the implementation concerns in MuJoCo below, but first, we would like to highlight that **the main results in our paper are also shown with the Atari environments** and that we have included a strong replication of our results in Atari with the **CleanRL codebase. It exhibits the same collapse we observed with our implementation.** As the reviewer, we wanted to rule out the collapse happening because of any bugs or implementation details with as much confidence as possible.
In addition, note that we have referenced Dohare et al. (Overcoming Policy Collapse in Deep Reinforcement Learning, 2023), who also observed a collapse on MuJoCo.
> I did not observe collapse for HalfCheetah ... I realize that it should stem from the difference in implementation.
We understand the concern and have taken two actions to help the reviewer increase their trust in our results:
1. **Clarify the motivation behind our setting, replicate it on CleanRL, and observe collapse:**
In addition to sharing our code with the reviewers, we replicated it in CleanRL as we did with Atari so that it is easier for a reviewer familiar with CleanRL to inspect.
We would like to highlight that our setting is fully described in Appendix B.2, and its differences from **CleanRL stem from using implementation details for continuous action spaces developed since the original PPO paper** (as CleanRL is a faithful replication of it). Several works we cited have studied the implementation details of PPO and have given recommendations that shape how PPO agents are implemented these days (Andrychowicz et al., 2021; Engstrom et al., 2020).
**We believe that this setting is more relevant to the community than the initial implementation of PPO in 2017.**
A table with all the differences can be found below.
2. **We have taken the CleanRL implementation used by the reviewer and applied minimal changes that resulted in a collapse.**
- Remove value loss clipping. This is an unnecessary trick that complicates the analysis (we are primarily interested in the actor). It is also not recommended by Andrychowicz et al. (2021)
- Make the standard deviation output of the action space dependent on the state. Since our research focuses on the problems related to state representation, the study inherently makes more sense if the standard deviation is state-dependent. **If the standard deviation were state-independent, it would not be influenced by the representation collapse we are investigating, thus undermining the core premise of our study.**
**With this, we have obtained similar collapsing curves as seen in our submission with and without an annealed learning rate.** We have communicated the implementations and training curves to the AC.
> The authors mentioned "We do not decay the learning rate (line 154) we are interested in agents that can ideally train indefinitely" …
Moreover, let us clarify the wording and context of our work; we acknowledge that we should have provided a more detailed explanation in our initial submission.
**We are interested in an online learning setting where the agent receives the experiences continuously, and it is not straightforward to determine when the learning process should terminate.** The continual and online learning aspect of our experimental setup makes it challenging to apply standard annealing schedules, and a primary goal of our research is to maintain the plasticity of representations. Therefore, we choose not to anneal the learning rate to better study the behavior of representation collapse under these conditions.
**The environments we use are single-task environments to ablate additional MDP non-stationarity, but they are complex enough for the agents to keep improving when trained for longer than our common benchmark limits.** In the tasks we have tried, the policies trained with a constant learning rate collapsed while apparently still improving and before stabilizing/converging.
**We do not claim that our approach is the only way to study this phenomenon**, and we acknowledge that annealing the learning rate could be a method to delay or prevent collapse.
In the additional runs we have provided, we observe collapse even when annealing the learning rate when training for long enough.
> Choice of eval environments
**We have communicated the results of all tasks to the AC. This should resolve this issue.**
We considered the severity of the collapse regarding both Tanh and ReLU activations. Ant with its default hyperparameters doesn’t collapse under either, and we favored Humanoid for its large action space over HalfCheetah, which seemed redundant to us given Hopper.
---
Rebuttal 2:
Comment: | Implementation detail | CleanRL’s default | CleanRL's default adapted | Our implementation |
| ------------------------- | --------------------------------------------- | -------------------------------------------- | ---------------------------------------------------------------------- |
| Network output | State-dependent mean / State-independent std | same / State-dependent std | same / State-dependent std |
| Transformation of the std | exponential | same | Softplus (recommended by Andrychowicz et al., 2021) |
| Action distribution | Normal | same | TanhNormal (recommended by Andrychowicz et al., 2021) |
| Reward transforms | Normalize and clip | same | None (to keep the default non-stationarity) |
| Observation transforms | Running normalization and clip | same | Normalization at initialization (to keep the default non-stationarity) |
| Layer initialization | Orthogonal with custom scale | same | Default PyTorch initialization |
| Learning rate annealing | True | True, False (collapses for both) | False |
| Value loss clipping | True | False (out of scope) | False (recommended by Andrychowicz et al., 2021) | null | null | null | null |
Graph Learning for Numeric Planning | Accept (poster) | Summary: This paper proposed new learning-based methods for numeric planning. Numeric planning is formalized with the PDDL language. The proposed approaches are based on graph neural networks, and are evaluated in a lot of domains, e.g., Blocksworld and Childsnack.
Strengths: The experiment section seems solid, and the proposed approaches are evaluated in lots of domains. The experiment results demonstrate significant improvement over the compared baseline.
Weaknesses: As there is only one closely related work on learning for numeric planning, it is hard to assess whether this is an important research problem.
The abstract has not summarized the novelty of this work, and the relationship between the two proposed approaches.
There are some missing related works on learning heuristics, e.g., Evolution of Heuristics (github.com/FeiLiu36/EoH).
Technical Quality: 2
Clarity: 1
Questions for Authors: Are the proposed two approaches suitable for different scenarios? Can the authors analyze this issue?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: This paper has discussed the potential limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and suggestion for related work.
## Questions. (Applicability to Different Scenarios)
The proposed approaches are suitable for various scenarios such as reinforcement learning (RL) and more general graph representation learning (GRL) problems.
- RL
Although we focused on the supervised learning setting in the paper, the proposed methods, namely the graphical representation of relational planning states and the feature extraction algorithms, can be applied in the RL setting where a factored representation of states is given. More specifically, the graphical representation of the task and the GNN model, or simply a neural network operating on the features generated from the ccWL algorithm, can be applied with typical RL methodologies such as PPO and DQN. Furthermore, a model of the actions is not even needed: a simulator suffices, as our methods only assume a symbolic representation of states and assume nothing about actions.
- GRL
We have also proposed a new graph kernel (ccWL kernel in Sec. 3.2) which can handle graphs with both continuous and categorical attributes. As mentioned in the paper, there are few graph kernels which handle non-categorical attributes and the ones that do are more statistical in nature. Our ccWL kernel is a new graph kernel that may be well suited for GRL tasks that require reasoning over logic such as knowledge graphs, as opposed to statistical reasoning as in common GRL benchmarks such as molecular datasets.
## Weaknesses
W1: You question the importance of research on learning for numeric planning on the basis of the fact that there is only one other published work on this problem (published in 2024). However, the reason for there being few approaches so far is not that the research problem is unimportant, but that it is *new* and *difficult*. There are a lot of approaches for numeric planning (including the baselines we use), and planning competitions for numeric planning have been run regularly since 2002. However, these works do not use machine learning at all.
According to Turing Award winner Yann LeCun, “learning to plan complex action sequences” is one of the top 3 challenges for deep learning – see for instance his invited talks at AAAI 2020 (47:20 on the YouTube video) and AAAI 2024 on “Objective-Driven AI: Towards AI systems that can learn, remember, reason, and plan”. The first works using modern deep learning techniques that obtained some success with classical (non-numeric) planning date back to 2018, and so are relatively recent. Numeric planning is the natural next step in this important line of work. As we show, learning has the potential to improve the state of the art in numeric planning.
W2: We agree that the abstract can be improved and will do the suggested changes. Regarding the novelty of this work, this is the first work that learns heuristics for numeric planning – and one of the works in learning for planning that shows a clear improvement over the state-of-the-art. The current work on learning heuristics for planning is centered on classical planning where variables are Boolean.
The two proposed approaches (Graph Neural Networks and Graph Kernels) are the two common classes of ML methods that operate on graph data. They have their own respective advantages and disadvantages, akin to those of deep learning vs. classical machine learning. More specifically, GNNs are better suited to handling large data and extracting latent features via backpropagation for structured data, while graph kernels benefit more from small training set sizes and are much more lightweight in terms of parameters. We proposed using both approaches to be comprehensive in our evaluation on our inputs transformed into graphs. We will make this relationship and the motivation for using both approaches clear in our abstract.
W3: The main common point between the work of Fei Liu and ours is the use of the word “heuristic” – yet with a different meaning. In planning and heuristic search, the meaning of the word “heuristic” is very precise: a heuristic is a function taking as input a state, and returning an estimate of the cost to reach the goal from this state. The meaning of the word “heuristic” is different in other areas, such as optimisation, where it means “approximate decision”, “rule of thumb”, or “strategy”. The paper you mention is about meta-heuristic optimisation, which is a branch of optimisation where one seeks to design general-purpose strategies to guide the search towards an approximate solution. This is only very loosely related to our work. There is an abundance of recent work on learning branching heuristics for mixed-integer programming, and other types of optimisation solvers, which again target very different types of problems. We are happy to mention them in the final version, taking advantage of the extra page available. | Summary: The paper proposes a new method for learning a heuristic function to guide search for solving numeric planning problems. In contrast to classical planning, the states in numeric planning may involve numeric variables while the state transitions are defined by mathematical expressions over these kinds of variables. In addition, numeric planning is computationally quite challenging to solve but it may be the right formulation in many interesting real-world applications. The proposed heuristic is learned from training data consisting of example planning instances and their corresponding optimal plans. Features are automatically extracted from a graphical representation of the planning instances and used subsequently by the machine learning models encoding the heuristic functions. The experimental evaluation is carried out on standard benchmarks for numeric planning. 
The results show clearly that state-of-the-art search algorithms guided by the proposed heuristics improve considerably over their competitors.
Strengths: - The paper targets an important problem in the area of automated planning and proposes more effective heuristics to guide search algorithms for solving these problems.
- The empirical evaluation is sound and covers standard benchmarks in the numeric planning domain. The results are presented in a relatively clear manner and therefore it is easy to understand the benefits of the proposed approach compared to existing state-of-the-art.
Weaknesses: - In my opinion, the presentation is the main weakness of the paper. While I believe it is a solid contribution for numeric planning, the way it is currently presented makes it hard to understand the details.
- The examples currently supporting sections 3.1 and 3.2 need to be expanded a little more. Right now, it is kind of difficult to follow them.
- Section 4 which describes the graph neural network is too high level and without a supporting example is very hard to understand.
- There is currently a relatively big disconnect between Sections 3, 4 and 5. Namely it takes a while to figure out that the features extracted by the proposed method are subsequently used to learn the heuristics. The current notation doesn't help much either. I also think that adding illustrative examples in Section 5 would clearly improve the quality of the presentation.
Technical Quality: 3
Clarity: 1
Questions for Authors: - The paper claims that grounding is not required for building the graphical representation of the planning problem. However, the graph shown in Figure 1 seems to be partially grounded. So, how much grounding is needed to build such graphs?
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: The limitations are addressed clearly in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and suggestions identifying which parts of the paper can be made clearer.
## Weaknesses
We agree that the paper presentation can be improved and found it challenging to fit all of the material in the page limit. We will make good use of an extra page to address the presentation issues mentioned, with more detail and illustrations. Please see the figure in the pdf of our global rebuttal, which helps clarify some of our proposed methods as pointed out in the final two points of your review.
## Question about Grounding
The short answer to your question is that we do *no* grounding when encoding what is given to us, i.e. a lifted planning task (lines 84-85), into a graph. More specifically, the grounded items you see (green, yellow, and red nodes) in Figure 1 are the minimal amount of information required to define a planning task. This information includes the current state and the goal condition.
The nomenclature of a “lifted planning task” is chosen because our graphs need the first-order representation of the task, as opposed to the flattened representation, in order to identify the objects (blue nodes) and link them via edges to the predicates and functions that use them as arguments in the initial state and the goal.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. | Summary: The paper tackles numeric planning problems by proposing two heuristics for numeric planning. The first one is based on graph kernels for graphs and addresses both continuous and categorical attributes. The second uses graph neural networks. The authors experimentally show the effectiveness of the two proposed algorithms by showing that they show better coverage compared to domain independent planners for numeric planning.
Strengths: - The paper shows experimentally that their proposed learned heuristics shows better coverage, and that is significant.
- The approach is novel and paper is also novel (except for the recent related work which the authors cover in the intro)
- The authors do a decent job with explaining the literature and citing appropriately.
- The use of classical machine learning due to being cheap and interpretable
Weaknesses: The paper presentation can be improved. Examples below (no particular order):
1. The abstract states “… in comparison to domain-independent planners” whereas it is more informative to state domain-independent numeric planners. (Other places did indicate numeric planners).
2. The paper can be dense in various places (section 5 for example)
3. Sometimes the authors miss details that can be helpful in understanding the paper. For example, what is M(3h||3n)? Can you explain? Maybe it would be good to give a bit of explanation of each of the configurations.
4. Fig 1: why are x, y, z not shown as blocks? The figure was confusing at first, especially given that there are dotted block spaces above y and z with only 2 available blocks whereas the limit is 3 blocks.
5. In the Table2 caption, mention which ones are your proposed planners and the two variations (rank/cost).
6. minor typo: “Requires requires” repeated in section 3.1
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you claim theoretically on admissibility, safe pruning, or tractability of the proposed algorithms? I find it strange that the authors refer to prior work re theory. At least a proof sketch can be given which refers to prior work.
2. Related to 1, can you prove the complexity on the WL algorithm (right before section 4 starts).
3. Can you please explain with an example how one should read Table 1.
4. Which planner is h^LMCUT used in. h^LMCUT is a heuristics, not a planner, right? Same with the other heuristics mentioned in the figure, not sure if it is accurate to refer to them as planners.
5. In section 8, you mention in the last sentence, one can learn forms of domain knowledge different from heuristic functions, can you give an example.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper covers that.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the suggestions and questions for helping clarify the paper.
## Weaknesses
We agree that the paper presentation can be improved and found it challenging to fit all of the material in the page limit. We will make good use of an extra page to address the presentation issues mentioned with more detail and illustrations.
The dotted boxes in Fig. 1 represent the location of the other blocks if the optimal plan was followed.
## Questions
Q1. Learning methods cannot guarantee admissibility of heuristics. The learned heuristics are safe because predicted values are never infinite. The proposed learning algorithms are tractable because they make use of polynomial-time architectures. The only exception is the training criterion, which may be intractable (e.g. solving a MIP optimally), but we can specify a timeout. The underlying search algorithms which we employ with our proposed learning algorithms are in the worst case intractable because (numeric) planning is in general intractable.
Q2. The complexity of the original WL algorithm is given in the original WL paper and our paragraph before Sec. 4 extends this proof to show the complexity of our ccWL algorithm is the same as that of the original WL algorithm.
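The complexity argument can be made concrete with a minimal sketch of classic 1-WL colour refinement (illustrative only; this is not the authors' ccWL implementation, and the graph encoding here is a generic adjacency-list assumption): each iteration touches every node and its incident edges once, so k iterations cost roughly O(k(|V| + |E|)) label-hashing operations.

```python
from collections import Counter

def wl_refine(adj, labels, iterations=3):
    """1-WL colour refinement: repeatedly hash each node's colour
    together with the sorted multiset of its neighbours' colours.
    Each iteration visits every node and edge once, which gives the
    O(iterations * (|V| + |E|)) cost mentioned in the rebuttal."""
    colors = dict(labels)
    for _ in range(iterations):
        new = {}
        for v, nbrs in adj.items():
            signature = (colors[v], tuple(sorted(colors[u] for u in nbrs)))
            new[v] = hash(signature)
        colors = new
    # histogram of colour classes, usable as a graph feature vector
    return Counter(colors.values())

# a path on 4 nodes vs. a star on 4 nodes: same node count,
# distinguished by their refined colour histograms
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
uniform = {v: 0 for v in range(4)}
h_path = wl_refine(path, uniform)
h_star = wl_refine(star, uniform)
```

The ccWL variant described in Sec. 3.2 additionally carries continuous attributes through the refinement; the iteration structure (and hence the asymptotic cost) is the same as above.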
Q3. The purpose of the table is to give an idea of how much larger / more difficult the testing problems are in comparison with the training problems. As an example, when reading the first row: the training problems for Blocksworld have between 2-11 blocks, while the testing problems have between 5 and 488 blocks. Furthermore, the optimal plan lengths (number of actions) for the training problems range from 2-34, while planners return plans with lengths between 8 and 662.
Q4. You are correct that a heuristic itself is not a planner. However, the “heuristics” in Table 2 are shorthand for the configurations of planners described in Sec. 6.2 as the notation is cumbersome if the planners were also mentioned. For example h^{mrp}+hj and M(3h||3n) are both different configurations of the ENHSP planner. As mentioned in lines 287-288, h^LMCUT is used in Numeric Fast Downward.
We will make this clear in the figure and table captions.
Q5. One simple example is to learn an action policy instead of a heuristic. We further list below several examples and related work in planning where we can learn domain knowledge in forms different from heuristic functions. All these methods can be used with our ccWL features (Sec. 3.2.) since they are agnostic to the downstream ML task.
- policy rules [1], which are implication statements of the form (Condition($s$) -> Effect($s, s’$)) telling us that an action $a$ should be applied in state $s$, leading to another state $s’$, if Condition($s$) and Effect($s, s’$) hold. Both the condition and effect can be some learned function over state features, such as those generated by the ccWL algorithm (Sec. 3.2)
- policy sketches [2], a generalisation of policy rules obtained by viewing the learned (Condition($s$) -> Effect($s, s’$)) statements as subgoals rather than direct action policies
- portfolios [3] which learn what planner configurations work best for a given domain
- task transformations such as learning to partially ground problems [4] or ignore objects [5]
[1] Guillem Francès, Blai Bonet, Hector Geffner: Learning General Planning Policies from Small Examples Without Supervision. AAAI 2021: 11801-11808
[2] Dominik Drexler, Jendrik Seipp, Hector Geffner: Learning Sketches for Decomposing Planning Problems into Subproblems of Bounded Width. ICAPS 2022: 62-70
[3] Tengfei Ma, Patrick Ferber, Siyu Huo, Jie Chen, Michael Katz: Online Planner Selection with Graph Neural Networks and Adaptive Scheduling. AAAI 2020: 5077-5084
[4] Daniel Gnad, Álvaro Torralba, Martín Ariel Domínguez, Carlos Areces, Facundo Bustos: Learning How to Ground a Plan - Partial Grounding in Classical Planning. AAAI 2019: 7602-7609
[5] Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling: Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks. AAAI 2021: 11962-11971
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. I have no further questions. | Summary: The authors introduce a method to generate features for planning tasks that involve numerical variables. These features can then be used with machine learning to learn a heuristic function from a set of training examples. Architectures used for learning include Gaussian processes and graph neural networks. The authors also introduce a method for learning to rank states and do a search based on the ranking instead of the cost-to-go. Results show that, for benchmarks modified to include numerical variables, learning a heuristic function with Gaussian process regression and ranking performs significantly better than planners that do not make use of a numerical representation.
Strengths: Ranking states instead of learning cost-to-go has shown promise. The paper presents a novel ranking method that can be combined with machine learning. The ranking method was significantly better than the corresponding non-ranking method. This could have broader implications for machine learning applied to planning.
Weaknesses: The only learning approach that performed better than the baseline planners was the Gaussian process regression with ranking, while Gaussian process regression with cost-to-go performed better than three out of four baseline planners. This makes it appear as if ranking is contributing to the overall success and not the numerical representation and learning.
Technical Quality: 3
Clarity: 3
Questions for Authors: Do any of these baseline planners make use of ranking?
Is it possible that the main increase in performance is due to ranking and not the numerical representation and learning?
On line 152, V is comprised of G, while, on line 151, it says that G is comprised of V. Do these two Gs represent different concepts?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The learning approach relies on supervised learning, which assumes a planner exists that can already solve problems and may limit performance to what the existing planner can solve in a given time limit. On the other hand, research using deep reinforcement learning does not assume the existence of any solver.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and questions for helping us identify where we could improve our paper’s clarity.
## Weaknesses clarifications
> This makes it appear as if ranking is contributing to the overall success and not the numerical representation and learning.
- We would first like to clarify that ranking is a learning concept. More specifically, we defined it as an optimisation problem, using a Mixed Integer Program on *training data*, with the aim of finding weights for the learning models. The simple analogy is that regression is a method for finding a function that best fits the training data, which one can likewise define as an optimisation problem with the MSE loss.
- Secondly, we note that Gaussian process regression (GPR) and ranking do not go together. Instead, the descriptions of the models in the review should be changed as follows:
> GPR with ranking
should be changed to "ccWL algorithm (Sec. 3.2) with ranking"
> GPR with cost-to-go
should be changed to "ccWL algorithm with GPR representing cost-to-go"
Please see the pdf of our global rebuttal for helping clarify the general pipelines of the models.
- Lastly, we note that the strongest planner baselines (M(3h||3n), M-FF and h^{MRP+hj}) are representative of the state-of-the-art for numeric planning. They use additional tricks such as helpful and macro actions, multi-queue search, and building novelty heuristics. We decided to not combine our work with these techniques as they are orthogonal methods for improving performance which are also applicable to our methods. As mentioned on line 313, direct comparison of our results is only possible with h^add and h^RMP. All of our learning configurations clearly dominate them.
We will make these details clearer in the additional page with extra explanations and illustrations.
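To make concrete what "ranking as a learning problem" means here, the following is a toy sketch. The paper's actual criterion is a Mixed Integer Program over training data; this substitutes a perceptron-style hinge-margin surrogate purely for illustration, and the feature vectors are invented toy data.

```python
import numpy as np

def learn_ranking_weights(pairs, dim, lr=0.1, epochs=200):
    """Learn weights w so that, for each training pair
    (x_better, x_worse) of state feature vectors, the score
    w @ x_better is below w @ x_worse by a margin of 1.
    (Hinge-loss surrogate; the paper solves this exactly as a MIP.)"""
    w = np.zeros(dim)
    for _ in range(epochs):
        for xb, xw in pairs:
            if w @ xb - w @ xw > -1.0:  # ranking margin violated
                w += lr * (xw - xb)     # push scores apart
    return w

# toy data: states nearer the goal (first element of each pair)
# should receive strictly lower heuristic scores
pairs = [(np.array([0.0, 1.0]), np.array([1.0, 0.0])),
         (np.array([0.2, 0.9]), np.array([0.9, 0.1]))]
w = learn_ranking_weights(pairs, dim=2)
```

The learned scorer is then used to order the open list during search, which only requires the relative order of scores, not calibrated cost-to-go estimates.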
## Questions
Q1. The baseline planners are state of the art heuristic search planners, which do not make use of ranking or learning.
Q2. Ranking is part of the learning process. Thus, without the numerical representation and learning, the models cannot employ ranking.
Q3. Thanks for clarifying this conflict of notation. You are correct that the Gs represent different concepts, and we will fix this.
The $G$ and $\mathcal{V}$ in line 151 refer to graph mentioned in line 124, while the $G$ and $V(g)$ in line 152 refer to the goals and the variables associated with the goal condition, where the $V(\cdot)$ notation is introduced in line 66. We will use different notations and make this clearer in the final version of the paper.
## Limitations. (Comparison with Reinforcement Learning)
We would first like to mention that although the paper focuses on supervised learning, our methods are applicable to RL settings as well. More specifically, the graphical representation of the task (assuming a factored state representation) and the GNN model can be applied with typical RL methodologies such as PPO and DQN. We decided to focus on supervised learning over RL because generating optimal plans from the provided training tasks generally takes a matter of seconds, with a few outliers taking more than a minute.
Secondly, regarding the pros and cons of RL, various works have shown that symbolic solvers outperform RL methods in scaling whenever a model is given (e.g. the NeurIPS Flatland challenge [1], comparing RL vs. non-RL winners, and a DeepMind paper on planning and RL methods for manipulation planning [2], where RL methods struggle to solve even the simplest benchmark problems).
[1] NeurIPS 2020 Flatland Winners. https://discourse.aicrowd.com/t/neurips-2020-flatland-winners/4010
[2] Ken Kansky, Skanda Vaidyanath, Scott Swingle, Xinghua Lou, Miguel Lázaro-Gredilla, Dileep George:
PushWorld: A benchmark for manipulation planning with tools and movable obstacles. CoRR abs/2301.10289 (2023) | Rebuttal 1:
Rebuttal: We thank all reviewers for their reviews and suggestions for improving our paper.
We noticed that the common weakness pointed out by reviewers is that our paper could make use of additional details or illustrations to better explain our methods. We agree and believe it was difficult to fit this in the page limit for the submission. We will make use of the extra page available for the final version to handle their suggestions.
Furthermore, we have attached a figure in the global PDF to visually summarise our proposed approaches, showing
1. how the separate sections and equations tie together into the final products, and
2. where learning is involved.
Pdf: /pdf/990daf15296d512083cd39c79e66186f389f3872.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Network Lasso Bandits | Reject | Summary: The authors propose to use network Lasso to learn a multi-task bandit problem with a given network structure. More specifically, the network structure has a pre-defined unknown clustering structure, where within each cluster all the bandit tasks share the same model. The authors propose a bandit algorithm that can learn and provide a sublinear guarantee. The key difference between this paper and GOBLin in Cesa-Bianchi et al., 2013 is that this paper uses a network Lasso (or something like a group Lasso) penalty while GOBLin uses a ridge penalty.
Strengths: Even though I think the authors overclaimed their contributions, which I will state below, I still feel it's meaningful to discuss network Lasso and design a bandit algorithm based on certain network structure, given the limited literature on multitask bandit. Compared to previous network bandit literature such as Cesa-Bianchi et al., 2013, this paper characterizes the network structure in more detail.
Weaknesses: 1. I think the authors need to provide more real-world examples to show why their network structure (and the correspondingly induced network Lasso bandit algorithm) is practical, instead of stating their algorithm is good because it provides a piecewise-constant property in contrast to the smoothness of GOBLin in Cesa-Bianchi et al., 2013. The key assumption in this paper is that the network structure is given, and the network can be split into connected clusters, within which the task parameters are the same. Can the authors find or describe a couple of practical examples/datasets where such a network exists, given that bandit is a very practical problem?
2. The literature review comparing with the previous literature is not very accurate or sufficient IMO. For example, the authors mention that Gentile et al., 2014 and Li et al., 2019 can cause overconfidence in constructing clusters. However, these algorithms do not have prior information about clusters such as a given network, and thus they have to learn the clusters conditioned on the task similarities. In that sense, these algorithms are more practical because oftentimes in practice network information is lacking. Here one should also add a related reference, Context-Based Dynamic Pricing with Online Clustering by Miao et al., 2022. There are also robust multitask bandit algorithms (e.g., Multitask Learning and Bandits via Robust Statistics by Xu and Bastani, 2024) that can also solve the network bandit problem if the network structure follows certain assumptions; Multi-Task Learning for Contextual Bandits by Deshmukh et al., 2017 discusses a multitask bandit problem but uses a kernel-based method. I suggest the authors add a more detailed literature review to discuss their paper's connection with these current multitask bandit algorithms.
3. Typically, greedy algorithms (the method proposed in this paper is greedy too due to Assumption 2) have better performance in bandit simulations compared to UCB-based algorithms (see Bastani and Bayati, 2020). Therefore, the comparison in Figure 1, where all benchmarks are UCB algorithms, is not fair. I think it's necessary to add the following benchmarks: OLS-Bandit or Lasso-Bandit (Bastani and Bayati, 2020) without task sharing, Cella et al., 2023, which uses low-rank structure for multitask bandits, and also a few others mentioned above, to show that the network structure indeed helps while controlling for the difference between UCB and greedy algorithms.
4. I feel it's a false claim that the regret bound in Theorem 3 "doesn't depend on the dimension" due to the concentration inequality from Hsu et al., 2012. Intuitively, the regret bound should depend on the dimension unless one assumes that the number of tasks in a cluster is d-dependent, so that the regret bound is smaller compared to a typical single-bandit regret bound. I think the reason the bound here seems d-independent is that the dimension $d$ is hidden in the problem-dependent parameter $\phi$. Since the authors assume the context $x$ has norm 1, the minimum eigenvalue of $E[xx^\top]$ should scale as $1/d$, and hence so should $\phi^2$. I don't think a typical tail inequality from Hsu et al., 2012 can improve the bound with respect to the context dimension.
5. I think the asymptotic assumptions in Theorem 3 are incompatible with the finite-sample analysis in a typical bandit analysis and look unnatural. I suggest the authors keep the isoperimetric ratio and centrality index as part of the regret bound (instead of forcing them out using the asymptotic assumptions), even though it might add an additional T-linear term due to the misspecification error caused by the inter-cluster edge connections. I feel that's the case because in the extreme case where each cluster has size 1, there will be a misspecification error penalizing tasks connected by inter-cluster edges towards each other. I think it's totally fine to have such non-sublinear terms to provide a more comprehensive understanding of the limits of such network structures.
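The scaling intuition in point 4 can be checked numerically; a minimal sketch (assuming contexts are drawn isotropically and then normalized, one natural way to satisfy the unit-norm assumption) showing that the minimum eigenvalue of $E[xx^\top]$ then scales as $1/d$:

```python
import numpy as np

# If unit-norm contexts are isotropic, E[x x^T] = I/d, so the minimum
# eigenvalue of the population Gram matrix scales as 1/d.
rng = np.random.default_rng(0)
d, n = 20, 100_000
x = rng.normal(size=(n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # ||x|| = 1 for every context
Sigma_hat = x.T @ x / n                         # empirical estimate of E[x x^T]
lam_min = np.linalg.eigvalsh(Sigma_hat).min()
print(lam_min, 1 / d)                           # both approximately 0.05
```

The distribution here is only one instance of the unit-norm assumption; other context distributions could behave differently, which is exactly what the discussion below turns on.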
I am willing to raise my rating if the authors can solve my questions and concerns.
Technical Quality: 2
Clarity: 2
Questions for Authors: See my points above.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Providing a real-world example
We would like to point out that the bulk of the contributions are theoretical. They lie especially in the establishment of the oracle inequality, and ensuring the RE condition for the empirical covariance matrix.
With that being said, the cluster structure is well-motivated in the literature, especially with a high number of tasks. Practical examples for this setting include social networks, where multiple similar users can be linked within a graph structure, or personalized medicine, where the links in the graph represent physical proximity. Especially in the social network setting, the number of users, i.e., the number of nodes, can grow very large, making traditional models infeasible. Under that setting, our work answers how to exploit a certain structure when it is available in addition to the clustering goal, which is a standard approach in multi-task learning. It can even be used as a way to compress the feature-vector data given a large dimension and a large number of users.
Concerning the comparison to methods that do not rely on a graph, we understand the reviewer's concern, but it is common in the literature to compare against methods that do not exploit a given structure. For example, in the Lasso bandit papers [1,3], the authors compare to the LinUCB algorithm, which relies on the principle of optimism in the face of uncertainty, despite the action sets being generated i.i.d.
## Inaccuracy and Insufficiency of the literature
We apologize for any inaccuracy or insufficiency in the literature, and we will take the references pointed out by the reviewer into account.
## Comparison to OLS bandits, and low-rank multi-task bandits
* OLS: we implemented a simple algorithm that uses the ordinary least squares estimator for each task independently.
* For the low-rank multi-task bandits [6], following the authors' recommendation, we implemented the *TraceNormBandit* algorithm and used the accelerated gradient method presented in [9]. Interestingly, its performance is most of the time second to our approach. This can be due to the fact that the cluster structure of $\mathbf \Theta$ can be mathematically written as $\mathbf{\Theta} = \sum_{C \in \mathcal{P}}\mathbf{1}_C\mathbf{\theta}_C^\top$, where $\mathbf{1}_C$ is the indicator vector of cluster $C$ (coordinates equal to 1 on the nodes belonging to $C$ and zeros elsewhere) and $\mathbf{\theta}_C$ is the true vector of every node in $C$. It is clear that the range of $\mathbf \Theta$ equals the span of $\{\mathbf{1}_C; C \in \mathcal{P}\}$, implying that its rank is at most $\min(d, |\mathcal{P}|)$. It will then satisfy the low-rank assumption for $|\mathcal{P}| < d$.
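The rank argument above is easy to verify numerically; a toy sketch (cluster sizes and dimension chosen arbitrarily for illustration):

```python
import numpy as np

# Theta = sum_C 1_C theta_C^T: every node in cluster C shares theta_C,
# so rank(Theta) <= min(d, |P|) — here |P| = 3 clusters in dimension d = 6.
rng = np.random.default_rng(0)
d = 6
clusters = [[0, 1, 2], [3, 4], [5, 6, 7]]   # partition of 8 nodes
Theta = np.zeros((8, d))
for C in clusters:
    Theta[C] = rng.normal(size=d)           # shared true vector of cluster C
print(np.linalg.matrix_rank(Theta))         # 3 = min(d, |P|)
```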
## Dependence on the dimension
We kindly invite the reviewer to look at the general rebuttal for a first answer on this issue.
In addition to that, we appreciate the reviewer's observation about the dimension that could be hidden in the $\phi$ factor, although $\phi$ does not necessarily correspond to the smallest eigenvalue. Also, we point out that we specify that only one regret term is independent of the dimension, not the whole regret itself. The term that is due to ensuring the RE condition for the empirical $\bf\Sigma$ matrix depends on it.
For the first part, we will instead write that our regret has better dependence in the dimension and $\phi$ than other LASSO-type contributions in the literature.
With that being said, we note that in the works of [1,2,3,4,5,6], the dependence of $\phi$ on problem parameters such as the dimension has not been discussed.
As a result, we will replace our claim after Theorem 3 with: "Except for a possible dependence on the dimension hidden in the RE condition constant $\phi$, the part of the regret due to the empirical process does not depend on the dimension."
## Problem with asymptotic assumptions in Theorem 3
We will remove the additional asymptotic assumptions from the regret, and point to them either as a comment or as a corollary. Our main aim was to simplify the regret bound. As for the linear dependency and the misspecification error, we do not understand where they can come from, and we kindly ask the reviewer for clarification on that point. Indeed, our asymptotic assumptions on the graph do not change the horizon dependency, and our assumption on the relation between the horizon per task and the number of users does not change the fact that our regret is sublinear in the total horizon.
---
Rebuttal 2:
Title: Reply to the authors
Comment: I thank the authors for their responses, which resolve some of my concerns about the literature and empirical results. However, the rest of the questions are still not addressed well.
1. I still find the authors' explanation of the regret dependence on the dimension doubtful. Intuitively, as long as the bound is gap-independent, i.e., O(sqrt(T)), and there are no assumptions on the model, e.g., sparsity of the parameters, the bound must depend on the dimension d. It could be that, due to the sharing structure, the d cancels out with the number of tasks that share the same parameter. However, I didn't see any assumption on the relation between d and the number of tasks in a cluster. Actually, from the explanation, I now feel it's all the terms involving M that implicitly contain the dependence on d, e.g., the trace of $M \in \mathbb{R}^{d \times d}$ should be proportional to d. But this is not explained clearly anywhere.
2. My intuition about a potential misspecification error that will be linearly dependent on the time T is as follows. The network structure here has both inter-cluster and intra-cluster edges. If the parameters within the same cluster are the same and the model allows heterogeneity across clusters, the penalty w_{m, n} \|\theta_m - \theta_n\|_2 will introduce extra bias if m and n do not belong to the same cluster. The trick the authors use to avoid this misspecification error is to use asymptotics assuming that inter-cluster edges are sparse. However, without this assumption and only with a static network assumption, i.e., the network is fixed, this will introduce a term linear in T. I am curious how, after removing the additional asymptotic assumptions, the authors can provide a bound that doesn't contain misspecification error, without assuming away the inter-cluster edges.
I'm happy for further discussion but would like more clear intuition and explanations on my questions. According to the above technical concerns, which is important for a theory paper, I won't change my score for now.
---
Rebuttal 3:
Title: About the dimension dependence and the possible linear regret (part 1)
Comment: We thank the reviewer for the raised issues, and we address them below.
## Dependence on the dimension
We restate that our regret does depend on the dimension logarithmically, such dependence being the result of ensuring that the restricted eigenvalue condition (Definition 2 and Assumption 4) holds for the empirical multi-task Gram matrix. The other part of the regret, the result of the oracle inequality bound, does not depend on the dimension. We kindly invite the reviewer to check our proofs in the supplementary material. To facilitate navigating the proofs from Lemma 1 to Theorem 1, where we intentionally separated the deterministic inequalities (Lemma 1 and Lemma 2) from the probabilistic ones (Lemma 3 and Proposition 4), we provide the following explanation of every technical result's role:
* Lemma 1 follows from the optimality of $\hat{\mathbf{\Theta}}$ and the piecewise constant assumption on $\mathbf{\Theta}$. Indeed, the total variation norm of $\mathbf{\Theta}$ only comes from the differences between the parameter vectors across $\partial\mathcal{P}$.
* Lemma 2 bounds the total variation of the error signal by leveraging a graph-based decomposition, stated in Proposition 2 (where we decompose the identity matrix $\mathbf{I}\_{|\mathcal{V}|}$).
* Lemma 3 relies on Theorem 2.1 of [10]. It amounts to bounding the squared Euclidean norm of $\mathbf{X}\_{\mathcal{V}}^\top\mathbf{\eta}$, where $\mathbf{X}\_{\mathcal{V}}$ is the $t \times d|\mathcal{V}|$ matrix that is block-diagonal with blocks equal to $\mathbf{X}\_1, \cdots, \mathbf{X}\_{|\mathcal{V}|}$, the matrices obtained by concatenating the row context vectors encountered for every task. This is equivalent to bounding the quadratic form $\mathbf{\eta}^\top M\mathbf{\eta}$, where $M$ is the matrix we defined in the general rebuttal response, and that has size $t \times t$ rather than $d \times d$. Now, considering any matrix square root $N \in \mathbb{R}^{t \times t}$ of the PSD matrix $M$ (i.e. verifying $N^2 = M$), we obtain
$$ \Vert \mathbf{X}\_{\mathcal{V}}^\top\mathbf{\eta} \Vert^2 = \mathbf{\eta}^\top M\mathbf{\eta} = \Vert N\mathbf{\eta} \Vert^2.$$
We make use of $N$ above only because the result in [10] is stated for square matrices. Their $\mu$ vector is null as shown in the proof of Lemma 3, since $\mathbf{\eta}$ is formed of centered noise coordinates. What remains is a part that solely depends on $M$, more precisely on the trace of $M$, the trace of $M^2$, and the spectral norm of $M$. We have $tr(M^2) \leq tr(M)^2$, simply because the left-hand side is the sum of squares of $M$'s eigenvalues, and the right-hand side is the square of their sum, and these eigenvalues are all nonnegative since $M$ is PSD. The spectral norm is the maximum of the eigenvalues and can be bounded by their sum, the trace. Hence, the only dependence of the bound on $\mathbf{X}\_{\mathcal V}$ is via $tr(M)$. We have
$$tr(M) = tr(\mathbf{X}\_{\mathcal{V}}\mathbf{X}\_{\mathcal{V}}^\top) = \Vert \mathbf{X}\_{\mathbf{\mathcal{V}}}\Vert^2\_F = \sum\_{m \in \mathcal{V}}\Vert{\mathbf{X}\_m}\Vert^2\_F \leq \sum\_{m \in \mathcal{V}} |\mathcal{T}\_m(t)|^2 \leq t^2,$$
where $\Vert \mathbf{X}\_m\Vert^2\_F$ is the sum of the squared Euclidean norms of context vectors encountered by task $m$, all of which are bounded by $1$ by assumption. In comparison, bounding the stochastic process for the Lasso for example [1,2,3,5] relies on bounding the infinity norm, which introduces the dimension via a union bound.
* Proposition 4 combines Lemmas 1, 2, and 3.
* Theorem 1 is the only place where we make use of our RE assumption, which introduces two constants $\phi$ and $\kappa$.
* For $\phi$, in addition to our statement that no LASSO-type bandit paper discussed its dependence on $d$ or any other problem parameter, we point out that even in the offline learning literature studying the LASSO problem and its related problems, such a dependence is not considered. Still, we indicated that we will indeed mention a potential dependence of $\phi$ on the dimension.
* For $\kappa$, the interval to which it belongs depends only on the structure of the graph clusters and the graph boundary.
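The trace facts used in the Lemma 3 bullet above ($tr(M^2) \leq tr(M)^2$ and $\Vert M \Vert_2 \leq tr(M)$ for PSD $M$, and $tr(M) = \Vert \mathbf{X}_{\mathcal{V}}\Vert_F^2$) can be checked numerically; a toy sketch with arbitrary per-task context counts:

```python
import numpy as np

# M = X_V X_V^T is block diagonal with blocks X_m X_m^T (size t x t, not d x d).
rng = np.random.default_rng(0)
d, sizes = 4, (5, 7, 3)                     # dimension, contexts per task
Xs = [rng.normal(size=(t_m, d)) for t_m in sizes]
Xs = [X / np.linalg.norm(X, axis=1, keepdims=True) for X in Xs]  # ||x|| <= 1
t = sum(sizes)
M = np.zeros((t, t))
i = 0
for X in Xs:                                # assemble the block diagonal
    M[i:i + len(X), i:i + len(X)] = X @ X.T
    i += len(X)
eig = np.linalg.eigvalsh(M)                 # eigenvalues of the PSD matrix M
assert (eig ** 2).sum() <= eig.sum() ** 2 + 1e-9   # tr(M^2) <= tr(M)^2
assert eig.max() <= eig.sum() + 1e-9               # ||M||_2 <= tr(M)
frob2 = sum((X ** 2).sum() for X in Xs)            # ||X_V||_F^2
assert abs(eig.sum() - frob2) < 1e-9               # tr(M) = ||X_V||_F^2
```

Since every row is normalized here, $tr(M)$ equals the total number of contexts $t$, consistent with the bound above.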
Concerning the reviewer's remark about gap-independent bounds being dependent on the dimension: in our case that would be equivalent to studying the worst case of our setting and stating an instance-independent bound. Here, one must specify what the bandit environment class is. If the environment contains, for example, all of the signals that are piecewise constant on the fixed graph, then a dependence on the dimension might appear, but that is not what we study.
---
Rebuttal 4:
Title: About the dimension dependence and the possible linear regret (part 2)
Comment: We would like to point out that in references [3,5,6], all of which consider assumptions akin to ours, the dependence on $d$ in the $T$-dependent regret term is due to summing up the time-dependent regularization coefficient over time. This regularization coefficient's choice follows from bounding their respective stochastic processes. All of their regularizations are norms ($L\_1$, and the nuclear norm for the multi-task setting of [6]), whereas in our case we use a semi-norm. Hence, intuitively, the behavior of the bounds might differ. In addition to that, our semi-norm acts at the level of the nodes, not at the level of the dimension as is the case in [6].
Other potential sources of dimension dependence might be the constants appearing in the relaxed symmetry and balanced covariance assumptions. However, no such dependence was discussed in previous work. We will point out this potential dependence.
## Potential linear regret due to misspecification
Our regret is based on a converging oracle inequality which holds with high probability. All the event probabilities required for it to hold are discussed in the proofs. The complementary events, which could potentially result in linear regret bounds, have vanishing probabilities associated with them, thus a linear term in the regret is ruled out.
In addition, our asymptotic assumptions in the regret affect two terms:
* Total horizon $T$, equal to $|\mathcal{V}|\bar{T}$, on which the regret depends sublinearly.
* Term $f(\mathbf{\Theta}, \mathcal{G})$ that does not have any dependence on time, hence no dependence on the horizon, and no possibility to cause a linear horizon dependence.
Such assumptions are not made to assume that inter-cluster edges are sparse; they are made to simplify the bound for the reader and to offer more intuition on the dependence on the graph, but they cannot introduce a linear dependence, as explained for the $f(\mathbf{\Theta}, \mathcal{G})$ term. Our RE assumption, and especially the part concerning the RE-norm, accounts for the inter-cluster variations. Indeed, it involves the $(1-\kappa)^+\Vert \mathbf{B}\_{\partial\mathcal{P}}^\dagger\mathbf{B}\_{\partial\mathcal{P}}\mathbf{Z}\Vert$ term, which clearly depends on the boundary. However, for a well-behaving graph, we can assume that $\kappa>1$ (kindly refer to the response to reviewer QF4d, in which we express more intuition on this aspect), and even for $\kappa < 1$, no linear dependence is possible. Indeed, it is sufficient to examine the proof of the oracle inequality to see that it has a $\tilde O(\cfrac{1}{\sqrt{t}})$ dependence (where $\tilde O$ hides logarithmic factors), which results in the $\sqrt{T}$ part of the regret. | Summary: In this paper, the authors work in the multi-task contextual bandit setting by representing the task correlations through a graph structure. To solve this problem, the authors propose an algorithm that utilizes a linear regression formulation with a Lasso constraint in terms of the node connectivity. Theoretical analysis as well as experiments against several baselines are presented to demonstrate the effectiveness of the proposed method.
- The paper is generally well-written, with crisply clear descriptions of required assumptions, and the proposed solution is intuitive and well-motivated.
- Good empirical performances. Authors compare proposed algorithm with several clustering of bandits baselines, showing the effectiveness of the proposed method. The performance gain over existing methods is impressive.
- Novel theoretical analysis roadmap. Overall, the theoretical analysis pipeline is novel and looks promising to me. With the additionally introduced RE assumption, the authors are able to improve the regret bound to $\tilde{O}(\sqrt{\bar{T}})$ instead of the vanilla time horizon.
- My major question is regarding the numerous assumptions required for the theoretical analysis. For instance, in Assumption 1, the authors assume the candidate arms across different rounds are generated i.i.d. from a fixed distribution. This is different from existing clustering of bandits works, where the candidate arm contexts in each round are conditioned on previously observed arms. In this case, Assumption 1 somewhat deviates from the actual applications of recommender systems, where the candidate arms of each round are refined as more information is collected from the environment.
- For the experiments, the authors have compared against multiple clustering of bandits works. In this case, it would be good if the authors could include additional discussion comparing their theoretical outcomes with those of existing clustering of bandits works, which can offer a more intuitive comparison with existing approaches.
Strengths: Please see my comments above.
Weaknesses: Please see my comments above.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to my questions above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see my comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Assumption on the arm-generating process
Kindly refer to the general rebuttal where we address this point.
## Comparing the theoretical outcomes
We will take the advice and add comparisons of theoretical results to the used baselines. Here we will compare with previous works in clustering and multi-task learning:
From Theorem 3, our result yields a regret bound of $\mathcal{O}\left(\sqrt{\frac{\bar{T}}{c}}\left(\sqrt{|V|}+\sqrt{\log(\overline{T}|\mathcal{V}|)}+\sqrt[4]{|\mathcal{V}|\log(\bar{T}|\mathcal{V}|)}\right)+\frac{1}{A}\log(d|\mathcal{V}|)\right)$, where $c$ denotes the minimum topological centrality index of any cluster.
Compared to the result of Gentile et al., 2014 (clustering approach), $\tilde{\mathcal{O}}\left(\left(\sigma\sqrt{d|\mathcal{P}||\mathcal{V}|\bar{T}}+ \sqrt{|\mathcal{P}||\mathcal{V}|\bar{T}}\right)\left(1+\sum_{j=1}^{|\mathcal{P}|}\sqrt{\frac{v_j}{\lambda|\mathcal{V}|}}\right)\right)$ (note that log dependencies vanish in their notation), we provide a better dependency on the dimension. Also, our result does not depend on the number of clusters $|\mathcal{P}|$; instead, it depends on the topological centrality index, since we do not aim to learn the cluster structure explicitly.
The result of Cella et al., 2023 (multi-task setting) yields $\tilde{\mathcal{O}}(|\mathcal{V}|\sqrt{r\bar{T}}+\sqrt{rd|\mathcal{V}|\bar{T}})$. Apart from the better dependency on the dimension, our result showcases a better dependency on the number of tasks $|\mathcal{V}|$. Here a convenient comparison would be between the first term of our bound, $\sqrt{\frac{\bar{T}|\mathcal{V}|}{c}}$, which is not logarithmic and depends on $c$, and the first term in Cella's bound, $|\mathcal{V}|\sqrt{r\bar{T}}$, which essentially amounts to the part of their regret in which an oracle is aware of the low-rank structure. Here we could draw an analogy between the rank $r$ and the minimum topological centrality index of any cluster $c$. Under optimal conditions, $r$ would be low, and analogously $c$ would be large. Between these two non-logarithmic terms, our result showcases a better dependency on the number of tasks.
---
Rebuttal 2:
Title: Thank you for your response.
Comment: I thank the authors for the detailed discussion of theoretical comparisons with clustering of bandits works. Although I still think the i.i.d. assumption can be strong for real-world bandit learning scenarios (maybe it is needed for this series of works on Lasso bandits), I will keep my current positive score given the authors' theoretical contributions. | Summary: This paper addresses the multi-task bandit problem using graph information. The given graph represents the relationships between tasks. Assuming that the preference vector of clustered tasks is constant, the problem is formulated as a network lasso problem to estimate the lasso estimator.
A modified restricted eigenvalue condition, commonly used in high-dimensional statistics, is defined to derive the oracle inequality for the network lasso estimator on non-i.i.d. data. The oracle inequality of the proposed network lasso estimator is derived under the assumption that the true multi-task Gram matrix satisfies the adapted RE condition.
Based on the derived oracle inequality, a greedy-type algorithm is presented, achieving $\sqrt{T}$ regret. Numerical experiments support the theoretical performance of the proposed algorithm.
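The network lasso formulation the summary refers to can be sketched on a toy two-cluster graph; the following is only an illustration (arbitrary sizes, plain subgradient descent as the solver), not the paper's algorithm:

```python
import numpy as np

# min_Theta  sum_m ||y_m - X_m theta_m||^2 / n + lam * sum_(m,k) ||theta_m - theta_k||
rng = np.random.default_rng(0)
d, n = 3, 50
theta_a = rng.normal(size=d)
theta_b = theta_a + np.array([2.0, 0.0, 0.0])    # well-separated second cluster
true = [theta_a, theta_a, theta_b, theta_b]      # clusters {0,1} and {2,3}
edges = [(0, 1), (2, 3), (1, 2)]                 # one inter-cluster edge
X = [rng.normal(size=(n, d)) for _ in range(4)]
y = [X[m] @ true[m] + 0.1 * rng.normal(size=n) for m in range(4)]

lam = 0.5
Theta = np.zeros((4, d))
for it in range(3000):
    grad = np.array([2 * X[m].T @ (X[m] @ Theta[m] - y[m]) / n for m in range(4)])
    for m, k in edges:                           # subgradient of the fused penalty
        diff = Theta[m] - Theta[k]
        nrm = np.linalg.norm(diff)
        if nrm > 1e-9:
            grad[m] += lam * diff / nrm
            grad[k] -= lam * diff / nrm
    Theta -= (0.1 / (1 + it / 200)) * grad

# Intra-cluster estimates fuse, while the inter-cluster gap stays visible.
print(np.linalg.norm(Theta[0] - Theta[1]), np.linalg.norm(Theta[1] - Theta[2]))
```

The unsquared penalty is what produces the piecewise constant (cluster-fused) solutions discussed in the reviews, at the price of some bias across the inter-cluster edge.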
Strengths: - The proposed algorithm efficiently learns task preference vectors by using graph information that encodes relationships between tasks. Specifically, it employs a network lasso estimator under the assumption that preferences within clustered tasks are constant, demonstrating its effectiveness in high-dimensional contexts.
- The paper adapts the restricted eigenvalue condition from high-dimensional statistics to the graph-based multi-task bandit setting. Based on the adapted RE condition, they established an oracle inequality for the network lasso estimator and showed that the proposed algorithm achieves $\sqrt{T}$ regret, even though I haven't verified every proof in detail.
- The algorithm's performance seems robust even as the number of tasks and dimensions increase.
Weaknesses: - Since I'm not very familiar with the graph-based multi-task bandit setting, it may be that the concepts explaining the restricted eigenvalue condition (Def 2) are too heavy. It would be helpful to include comparisons or examples from existing RE conditions in high-dimensional statistics or high-dimensional contextual bandits to improve understanding.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. You mentioned that the proposed algorithm is cluster-agnostic, but if a graph representing the relationships between tasks is given, can't we identify which tasks are clustered? Compared to Gentile et al. (2014), it seems that the problem setting uses additional information. If the same information were given to Gentile et al. (2014), what advantages does the proposed algorithm have compared to not needing clustering?
2. Is the estimation error in the oracle inequality (Theorem 1) dimension($d$)-independent?
3. Can the restricted eigenvalue condition be transformed into a compatibility condition?
4. What is the definition of $\overline{Z}_P$?
5. The regret bound of high-dimensional contextual bandit using the Lasso estimator is proportional to the sparsity level (e.g., usually denoted by $s_0$) instead of the feature dimension $d$. What does sparsity correspond to in the network lasso instance, and how does it appear in the regret bound?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have well-addressed the limitations in Appendix D.4 and further research directions in Section 7.
The content discussed in this paper appears to have little to no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Explaining the RE condition and relating it to its counterpart in high-dimensional statistics
For our RE condition, if we do not restrict the signal $Z$ to the cone $\mathcal{S}$, and if we replace the RE norm with the Frobenius norm, we obtain a non-null minimum eigenvalue condition. If we assume that $\kappa>1$, then we are left with the Frobenius norm of $\overline{Z}_{\mathcal{P}}$. The latter represents the orthogonal projection of the signal $Z$ onto the space of signals that are constant per cluster. This is analogous to the fact that an RE condition in the LASSO case involves the Euclidean norm of a vector restricted to its coordinates in the sparsity set, which is its projection onto the space of vectors having null coordinates outside of the sparsity set.
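The projection analogy can be made concrete; a small sketch (cluster layout arbitrary) checking that cluster-averaging is indeed an orthogonal projection, i.e., idempotent and self-adjoint:

```python
import numpy as np

# \bar{Z}_P replaces each node's vector by its cluster mean — the orthogonal
# projection of Z onto signals that are constant on each cluster.
clusters = [[0, 1, 2], [3, 4]]

def project(Z):
    out = Z.copy()
    for C in clusters:
        out[C] = Z[C].mean(axis=0)
    return out

rng = np.random.default_rng(0)
Z, W = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
assert np.allclose(project(project(Z)), project(Z))                # idempotent
assert np.isclose((project(Z) * W).sum(), (Z * project(W)).sum())  # self-adjoint
```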
## Identifying the clusters
Clusters can indeed be estimated using the graph. However, we would have to deal with their uncertainty, as mistaking the cluster to which a node belongs can cause estimation errors to accumulate. Instead, we do not rely on an explicit cluster estimation; we rather use a regularization that exploits the structure implicitly. To draw an analogy with the LASSO estimator, estimating the clusters explicitly would be similar to estimating the set of indices of the non-null features of the parameter vector in LASSO regression.
## Advantage provided to the CLUB algorithm given the graph
The implementation of the CLUB algorithm from [7] takes a graph as an input, which is not necessarily the complete graph. That is what we ensured in our experiments, but still, our algorithm performed better.
## Dependence of the oracle inequality on the dimension
Kindly refer to the general rebuttal for this issue.
## Possibility of transforming the RE condition to a compatibility condition
Our RE condition can be transformed into a compatibility condition, as the compatibility condition is more general: it only requires the concerned quadratic form to be bounded from below by a constant multiplied by a convenient choice of norm. In that sense, the RE condition can be seen as a particular choice of "compatibility". The choice of such norm depends on the norms chosen for the Hölder inequality used to bound the inner product between our empirical process $\bf K$ and the error signal $\bf E$ (Lemma 3 and Proposition 4), where we used Cauchy-Schwarz for matrices (i.e. bounding by the product of Frobenius norms).
We hesitated between using a compatibility condition and an RE one, but we think the RE one is more interpretable, as it is a more straightforward generalization of the least eigenvalue, or of a function's curvature in general. We recommend reading [11] for intuition on how the RE condition points to curvature in some directions of space.
## Definition of $\overline{\bf Z}\_{\mathcal P}$
We apologize for the typo of writing ${\bf Z}\_{\mathcal P}$ instead of $\overline{\bf Z}\_{\mathcal P}$. As mentioned in line 175 it is the signal obtained by replacing each node vector with the average vector of the true cluster containing it.
## Sparsity parameter counterpart in our case
In [1,2,3], the regret part that dominates in terms of the horizon dependency is proportional to $s_0$, the size of the sparsity set. This part is obtained by summing up all of the bounds of the oracle inequality under well-defined good events. Applying that reasoning to our case, it is sufficient to look at the result of the oracle inequality (Theorem 1), and in particular at the term $f(\mathcal{G},\mathbf{\Theta})$ that we simplify in Theorem 3 using additional assumptions on the asymptotic behavior of the clustering.
For the sake of simplicity, let us first consider the case where $\kappa>1$. This case is intuitive as it is possible for example when the total weight of the signal boundary is negligible compared to the minimum topological centrality indices of clusters (that we denote by $c$ here to reduce the clutter): $w(\partial\mathcal{P}) \ll c$. Such a condition expresses some coherence between the graph and the clustering.
Under $\kappa>1$, we have $f(\mathcal{G},\mathbf{\Theta}) = \cfrac{a_2^2}{a_1 \sqrt{c}} + a_2$. Let us now take a look at the expressions of $a_1$ and $a_2$ given in Definition 2. On the one hand, $a_2$ decreases with the product $w(\partial \mathcal{P})\iota$, where $\iota$ here denotes the maximum inner isoperimetric ratio of a cluster in the graph (cf. Definition 1). On the other hand, the denominator $a_1\sqrt{c}$ grows when the ratio $\cfrac{w(\partial \mathcal{P})}{c}$ decreases.
Intuitively, both $c$ and $\frac{1}{\iota}$ capture a notion of how "full" the clusters are, and their "fullness" or connectedness should dominate the total weight of the boundary for our approach to be beneficial, in the same way that the condition $d \gg s_0$ makes the LASSO approach beneficial.
For the case where $\kappa < 1$, we will have an additive contribution of $w(\partial \mathbf{\Theta})$, which results in a benefit when $w(\partial \mathbf{\Theta}) \ll \phi$. We did not study the behavior of the constant $\phi$, but it can be an interesting future research direction.
---
Rebuttal 2:
Comment: Thank you for the detailed responses to my questions. I have no further questions. | Summary: The paper introduces a multi-task contextual bandit algorithm that leverages a graph structure to model relationships between tasks. The algorithm assumes that the preference vectors of the tasks are piecewise constant over the graph, forming clusters. By solving an online network lasso problem with a time-dependent regularization parameter, the algorithm estimates the preference vectors, achieving a sublinear regret bound lower than independent task learning. Theoretical findings are supported by experimental evaluations against other graph bandit and online clustering algorithms.
Strengths: (1) The paper introduces a approach by incorporating graph structures to model relationships between tasks.
(2) The algorithm is supported by comprehensive theoretical analysis, including a oracle inequality and a regret bound.
(3) Extensive experiments validate the proposed method, showing that it outperforms existing baselines in terms of cumulative regret, highlighting its practical applicability and effectiveness.
Weaknesses: (1) The problem setting and algorithm presented are primarily adaptations of existing works, such as Oh et al. [2021]. The main difference is the inclusion of a graph matrix in the user preference vector, but this is not the first algorithm to incorporate a graph in contextual bandits, limiting the overall novelty.
(2) The i.i.d. assumption in contextual bandits is quite strong. Even in clustering approaches like CLUB, a conditional i.i.d. assumption is used. The current regret upper bound complexity is \(\sqrt{VT}\). There should be special cases where the algorithm can improve over \(V\) to demonstrate a more significant advantage.
(3) Since 2019, there have been many more works on clustering in bandits. The authors should conduct a broader survey to include these more recent works and relevant baselines. Using SCLUB, which is considered outdated, as a baseline, limits the comprehensiveness and relevance of the comparative analysis.
Technical Quality: 2
Clarity: 3
Questions for Authors: What are the unique challenges in regret analysis with Lasso regularization compared to L2 regularization?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Limited novelty
While the technical tools used in our analysis are similar to those in Oh et al. 2021, we respectfully disagree. Indeed, such techniques have also been used in [3,4,5,6], but we still faced the challenge of formulating a suitable RE condition (Definition 2), ensuring that it holds with high probability for the empirical Gram matrix (Theorem 2), and proving a novel oracle inequality (Theorem 1) that was not established even in the offline learning literature with i.i.d. samples. Additionally, we have put additional effort into linking the analysis to properties of the graph, such as the total weight of the boundary, the maximum inner isoperimetric ratio of a cluster, and the minimum topological centrality of a cluster.
## I.i.d. assumption:
Kindly refer to the general rebuttal where we address this point.
## Broader survey for clustering of bandits and comparing to more baselines
We apologize for the lack of some references, and we will take them into account. As for the baselines, kindly refer to the ones we mentioned in the general rebuttal.
## Unique challenges in regret analysis with Lasso regularization compared to L2 regularization
We faced several challenges that would not arise with ridge (squared L2) regularization. We apologize for not being able to point out all of them in the main material, as we were limited by the page count.
First, in contrast to ridge regularization or regularizations of its type (e.g., the Laplacian regularization in [13]), there is no analytical solution to the optimization problem, which requires a completely different approach. It is similar in spirit to the difference between analyzing the LinUCB algorithm and Lasso-type bandit algorithms [1-6].
Second, unlike [1,2,3,5], which adapt the i.i.d.-case oracle inequality to the adapted case, we established a new one that can be of interest in the i.i.d. case in particular. We also point out the challenges faced in formulating the RE condition and ensuring that it transfers well to the empirical Gram matrix, which we already mentioned when addressing the limited-novelty point. For the RE condition in particular, we had to make sure that our algorithm guarantees that the graph structure accelerates the estimation of the Gram matrix, as formalized in Proposition 7 using Definition 3, both in the appendix.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Can you clarify which assumption in references [3, 5, 6] corresponds to the i.i.d. arm context assumption?
---
Reply to Comment 1.1.1:
Title: On the i.i.d. context set generation assumption in references [3,5,6]
Comment: Thank you for your question. The i.i.d. context generation assumption in those references is stated as follows:
* Reference [3]: at the beginning of Section 2.2 titled "Generalized Linear Contextual Bandits" (end of the third sentence there), which states: "where the tuple $\mathcal X_t$ is drawn i.i.d. over $t \in [T]$ from an unknown joint distribution with probability density $p_{\mathcal X}$".
* Reference [5]: at the beginning of Section 3.1 titled "Model and Notation" (beginning of the third sentence there), which states: "The successive sets $(\mathcal{A}_t)\_{t \geq 1}$ form an i.i.d. sequence with distribution $p_A$".
* Reference [6]: in the last sentence of Assumption 1 under Section 3.1 titled "Stochastic Linear Contextual Bandits", which states: "We assume the tuples $\mathcal{D}_1, \cdots, \mathcal{D}_N$ to be drawn i.i.d. from a fixed unknown zero mean sub-Gaussian joint distribution $p$ on $\mathbb{R}^{Kd}$". | Rebuttal 1:
Rebuttal: We would like to express our deep gratitude to the reviewers for the substantial effort they put into reading and evaluating our work.
Upon recognizing that several reviewers have raised common concerns, we will address these in a general rebuttal. Specific responses to individual concerns will be provided separately for each reviewer. We kindly invite the reviewers to refer to the lists of references we provide at the end of this general rebuttal.
## Context generating process (reviewers wAUZ, Z89V)
We assume that the context sets are i.i.d. generated, and verify relaxed-symmetry and balanced covariance assumptions. These are standard assumptions that have been used for some time in the literature [3,5,6]. In [2], a different assumption stating that the arms within a set of actions are i.i.d. is used.
As for [7], the authors still require the context set elements to be i.i.d. sampled from a full-rank process matrix with minimum eigenvalue $\lambda>0$ (such an eigenvalue assumption is stronger than our RE assumption). Furthermore, we do not offer a clustering approach, in contrast to other clustering algorithms, which aim to learn the cluster structure of tasks explicitly. Instead, we leverage the relevant cluster information implicitly, using an a priori available graph, in the same way that a LASSO estimator leverages the sparsity structure of the true parameter vector.
## Dependence of the oracle inequality (and hence the regret) on the dimension (Reviewers QF4D, 3afv)
In our regret bound (Theorem 3), we have:
* a part having a logarithmic dependence in the dimension, resulting from the need to ensure the compatibility condition for the empirical Gram matrix.
* a part that does not depend on the dimension, representing the bulk of the horizon dependence of the regret. This part is the result of summing up the bounds of the oracle inequality over time in the proof of Theorem 3.
As a result, to understand the absence of dependence on the dimension (except maybe for the constant $\phi$), we need to look at the establishment of the oracle inequality. To prove the latter, we had to bound a noise process of the form $\sum\_{\tau=1}^t x_\tau \eta_\tau$, which we treat rigorously in Lemma 3 of the appendix. There, we use the generalization of the Hanson-Wright inequality proven in [10] and that we mention in our paper in Theorem 4.
Using the notations of the proof of Lemma 3, denoting $ M = {\bf X}\_{\mathcal{V}}{\bf X}\_{\mathcal{V}}^\top $ which is a PSD matrix, the bound therein can be chosen to depend only on the trace of $M$, the square root of the trace of $M^2$, and the spectral norm of $M$. All of these quantities can be bounded by the trace of $M$, which in turn is equal to $\Vert{\bf X}\_{\mathcal{V}}\Vert_F^2$, at most bounded by $t$ since context vectors are assumed to have a norm at most bounded by $1$.
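In the same notation, the chain of bounds just described can be sketched as follows (since $M$ is PSD, its eigenvalues $\lambda_i \ge 0$ satisfy $\sum_i \lambda_i^2 \le (\sum_i \lambda_i)^2$ and $\lambda_{\max} \le \sum_i \lambda_i$):

```latex
\sqrt{\operatorname{tr}(M^2)} \;\le\; \operatorname{tr}(M), \qquad
\Vert M \Vert_{\mathrm{op}} \;\le\; \operatorname{tr}(M), \qquad
\operatorname{tr}(M) \;=\; \Vert \mathbf{X}_{\mathcal{V}} \Vert_F^2 \;\le\; t,
```

where the last inequality uses the assumption that each context vector has norm at most $1$.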
## Additional baselines (wAUZ, 3afv)
We added a comparison to the Trace-Norm bandit [6] and Local Clustering of Bandits [13]. We also added a comparison to OLS bandits with independent task learning.
# References
[1] Bastani, Hamsa, and Mohsen Bayati. “Online Decision Making with High-Dimensional Covariates.” Operations Research, Nov. 2019.
[2] Kim, Gi-Soo, and Myunghee Cho Paik. “Doubly-Robust Lasso Bandit.” Advances in Neural Information Processing Systems, vol. 32, Curran Associates, Inc., 2019.
[3] Oh, Min-Hwan, et al. “Sparsity-Agnostic Lasso Bandit.” Proceedings of the 38th International Conference on Machine Learning, PMLR, 2021.
[4] Cella, Leonardo, and Massimiliano Pontil. “Multi-Task and Meta-Learning with Sparse Linear Bandits.” Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR, 2021.
[5] Ariu, Kaito, et al. “Thresholded Lasso Bandit.” Proceedings of the 39th International Conference on Machine Learning, PMLR, 2022, pp. 878–928.
[6] Cella, Leonardo, et al. “Multi-Task Representation Learning with Stochastic Linear Bandits.” Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR, 2023.
[7] Gentile, Claudio, Shuai Li, and Giovanni Zappella. "Online clustering of bandits." International conference on machine learning. PMLR, 2014.
[8] Bühlmann, Peter, and Sara Van De Geer. Statistics for high-dimensional data: methods, theory and applications. Springer Science & Business Media, 2011.
[9] Ji, Shuiwang, and Jieping Ye. "An accelerated gradient method for trace norm minimization." Proceedings of the 26th annual international conference on machine learning. 2009.
[10] Hsu, Daniel, Sham Kakade, and Tong Zhang. "A tail inequality for quadratic forms of subgaussian random vectors." (2012): 1-6.
[11] Wainwright, Martin J. High-dimensional statistics: A non-asymptotic viewpoint. Vol. 48. Cambridge university press, 2019.
[12] Jung, Alexander. "Networked exponential families for big data over networks." IEEE Access 8 (2020): 202897-202909.
[13] Ban, Yikun, and Jingrui He. "Local clustering in contextual multi-armed bandits." Proceedings of the Web Conference 2021.
Pdf: /pdf/123e217809ca88fc5e24bad74d10bf780c97d9d1.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Building on Efficient Foundations: Effective Training of LLMs with Structured Feedforward Layers | Accept (poster) | Summary: In order to improve the efficiency of Large Language Models, the authors explore the use of three structured approximations in the FFN blocks of the Transformer: LowRank, BlockShuffle, and BlockDense. They consider both pre-training and decoding, which have distinct requirements and bottlenecks and a range of sizes from 110M to 1.3B. Furthermore, they introduce self-guided training, which uses a dense matrix as a residual component, which is then annealed. The proposed method achieves a 2.5x speed-up at a 0.4 PPL increase under the same training FLOPs.
Strengths: - The research topic of improving the efficiency of LLMs and, therefore, making them more affordable is crucial and timely. The authors chose to study the FFN bottleneck, which gets worse as models scale, making the research even more relevant as LLMs become larger.
- The authors conducted extensive experiments and ablations, notably on the model size, pre-training tokens, FFN width, batch size, and learning rate. Additionally, they provided scaling laws.
- The experimental setup is common and modern. Notably, they used RefineWeb, RoPE, GeLU, FlashAttention, GQA, and tuned betas.
- The paper is well-written and easy to follow.
Weaknesses: - The models were trained between 2B and 25B tokens, amounting to a few thousand steps, which is not enough for a language model to converge even at a smaller scale. Therefore, one cannot conclude whether the proposed method is competitive or degrades the performance compared to the baseline at (or close to) convergence.
- There is no downstream task evaluation of the models. Only the validation perplexity is reported.
- The simple baseline where the FFN is not expanded (`d_model -> d_model -> d_model`) is missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The scaling law seems to indicate that the proposed approach leads to worse performance per FLOPs. If so, what is the benefit of the proposed approach?
- Can you conduct one experiment at 300B tokens to show that your methods remain competitive in terms of performance at (or close to) convergence? e.g., the best-performing structured approximation with self-guided training against a vanilla baseline.
- Can you add the baseline where the FFN is not expanded as well as some evaluation on downstream tasks?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I would like the authors to acknowledge the limitations of the model size and number of tokens in comparison to state-of-the-art language models (such as LLAMA3), as their results may not be applicable at scale.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the thoughtful review and insights on this paper. We have added the necessary experiments as suggested and provided a detailed response below. We hope our response addresses the reviewer’s questions.
* **Q1**: The models were trained between 2B and 25B tokens, which is not enough for a language model to converge even at a smaller scale. Therefore, one cannot conclude whether the proposed method is competitive or degrades the performance compared to the baseline at (or close to) convergence. Can you conduct one experiment at 300B tokens ...?
**A1** : Thanks for your good suggestion. On the one hand, we follow the Chinchilla scaling law paper to allocate training tokens. This allows us to compare the scaling curves of these parameterizations with the optimal curve of the dense model and indicate their effectiveness at larger scales. On the other hand, to directly demonstrate that they maintain performance at the overtraining regime, we trained Transformers (110M) on 100B tokens within this short rebuttal period.
| Method| FFN size (M) | 2.2B Tokens (optimal dense scaling law) | 100B Tokens (overtraining regime)|
|-|-|-|-|
| | |Loss/ Perplexity | Loss/Perplexity|
|Transformer-s| 57|3.2569/25.67| 2.8143/16.68|
|LowRank | 21|3.3748/29.22|2.9256/18.65|
|BlockDense| 21| 3.3731/29.17|2.9239/18.61|
|BlockShuffle | 21| 3.3994/29.95|2.9413/18.94|
By comparing the performance between models trained with 2.2B and 100B tokens, it can be seen that these structured parameterizations maintain or slightly reduce the loss gap on 100B tokens, indicating that they’re still competitive at convergence.
* **Q2** : I would like the authors to acknowledge the limitations of the model size and number of tokens in comparison to state-of-the-art language models (such as LLAMA3), as their results may not be applicable at scale.
**A2** : Thanks for pointing this out. We will add a limitation in the revision noting that we did not investigate models comparable to today's practical LLMs, such as LLaMA-3. This is not only because of limited computing resources but also because this study is a first step toward investigating structured parameterizations of linear layers in modern LLM architecture training. We hope our findings and solutions about scaling, efficiency, and optimization will encourage their use in industry and in future work.
* **Q3**: There is no downstream task evaluation of the models.
**A3**: We train the Transformer-s models on 100B tokens and evaluate their zero-shot downstream performance. Table 1 in the response PDF shows that the results are consistent with the validation perplexity of pre-training. Structured parameterizations with 32% FFN parameters incur only about 0.9-1.4 accuracy loss on the downstream evaluations. Besides, we also provided similar good results of Transformer-xl trained on 26B tokens in Table 2 of the response PDF and the self-guided training consistently improves downstream performance by reducing training loss.
* **Q4**: The scaling law seems to indicate that the proposed approach leads to worse performance per FLOPs. If so, what is the benefit of the proposed approach?
**A4**: Compared to the dense model (optimal trade-off), structured matrices have the advantage of utilizing training FLOPs more effectively, potentially reaching lower loss with fewer parameters (see Figure 1 and Table 2 in the original paper). We will clarify this point more clearly in the revision. Specifically,
* In Figure 1 of the general response PDF, we apply a linear fit to the scaling points for better illustration. We train the dense and structured models on the same amount of tokens. By fixing the training FLOPs, structured matrices have fewer parameters and eventually achieve very close or even slightly better (e.g., LowRank) loss in Figure 1. Given their steeper scaling curves, we can also expect noticeably lower loss and fewer parameters for structured parameterizations per FLOP when the x-axis is further extended.
* In the "Wide and Structured Network" section of the paper, we also apply existing efficient techniques to the attention module to further optimize the use of training FLOPs. In most experiments, we only structure the FFN module to simplify the study, which negatively increases the attention module's impact on the overall architecture. By making the whole network wide and structured, we demonstrate that even with a medium-sized Transformer (335M), we achieved 0.5 lower perplexity with only 252M parameters under the same training FLOPs.
Moreover, the scaling curves of the structured matrices can be further optimized by finding a better trade-off. The good scaling behavior makes them strong candidates for future architecture design.
* **Q5**: The simple baseline where the FFN is not expanded (d_model -> d_model -> d_model) is missing.
**A5** : We understand that this baseline is to reduce the intermediate size of the FFN, resulting in a smaller FFN as well. However, we chose not to pursue this approach because our investigation focuses on structured linear transformations (e.g., low-rank or block-diagonal matrices), as stated in the introduction.
* First, these structured matrices serve as plug-and-play replacements for the dense layer. This allows us to extend our findings to dense layers in other architectures easily. Also, maintaining the input and output dimensions doesn't affect the design of other components.
* Second, techniques like pruning cannot function as plug-and-play transformations, as they may require complex dimension changes and impact other components in the network. Moreover, they fall outside the scope of our current context and are left for future work.
In summary, we excluded FFN changes to simplify our explorations. We believe these changes are also orthogonal and can be combined with the structured linear transformations.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. Overall, I am pleased with the answers and clarifications provided, and I’ve adjusted my score accordingly. I am not convinced by the arguments provided against the baseline where the FFN is not expanded, that is, the FFN remains dense with both layers set to the same size (d_model = dim_feedforward). I strongly recommend that the authors include this baseline.
**A1.** I am satisfied with the additional experiments at 100B tokens, given the computational cost of pre-training.
**A2.** I appreciate the commitment.
**A3.** I am pleased with the downstream evaluations, considering the limited time for the rebuttal, although I still believe additional experiments are necessary to make a strong case.
**A4.** Thank you for the provided figures on the scaling law. It is now clear to me that structured FFNs could outperform dense FFNs at larger scales.
**A5.** I disagree with the response on this point. I believe it is important to include a baseline where the FFN remains dense, as in the vanilla Transformer, but the second layer is the same size as the first. This baseline requires only setting `torch.nn.Transformer(d_model=512, dim_feedforward=512, ...)` in PyTorch. This baseline requires no implementation since only a single parameter needs to be changed, and the FFN remains structured (dense).
---
Reply to Comment 1.1.1:
Title: Response to Reviewer SNmM
Comment: Thanks for your quick reply and raised score. We ran the experiments the reviewer asked for and added the validation loss/perplexity results below. Baseline 1 indicates the standard Transformer with (d_model -> 4 * d_model -> d_model) FFN dimensions. Baseline 2 indicates (d_model -> d_model -> d_model) for the FFN module.
|| Transformer-s | Transformer-m | Transformer-l | Transformer-xl | Loss gap between -xl and -s | Slope of the scaling curve |
|-|-|-|-|-|-|-|
| Baseline 1 (100% FFN Params.) | 3.2569/25.97 | 2.9062/18.29 | 2.6594/14.29 | 2.5226/12.46 | 0.7343 | -0.3549 |
| Baseline 2 (25% FFN Params.) | 3.3695/**29.06** | 3.0402/20.91 | 2.7862/16.22 | 2.6470/14.11 | 0.7225 | -0.3636 |
| LowRank (32% FFN Params.) | 3.3748/29.22 | 3.0251/**20.60** | 2.7527/**15.69** | 2.6062/**13.55** | 0.7686 | -0.3852 |
* Firstly, LowRank performs much better at larger scales compared to Baseline 2, with a bigger loss gap between -xl and -s models and a steeper scaling curve. We think this is because reducing the intermediate dimension directly reduces the parameters, while the structured parameterization of the dense layer, like LowRank, is an approximation of the dense transformation. Thus, LowRank seems to perform better as the scale of the model increases.
* Secondly, we think that the reason Baseline 2 works slightly better than LowRank in the -s size is that, at small model scales, the loss caused by its optimization challenges cannot be completely mitigated by the benefits of better structural design. By alleviating the optimization challenges (e.g., applying self-guided training in the first half of the total training), performance improves from 29.22 to 28.02.
Finally, structured parameterization is a flexible technique that can replace any dense linear layer directly without altering its shape, and it can also be combined with other methods, like reducing the intermediate state size as in Baseline 2. | Summary: This paper mainly focuses on using structured matrices to substitute for dense matrices in FFNs when training from scratch. The authors propose BlockDense as a combination of low-rank dense and block-diagonal matrices (Figure 2), and to address the loss-spike issues in the low-rank training process, they propose "self-guided training", which combines the efficient parameterization with a dense linear layer as $o = \alpha W x + (1 - \alpha) U (V x)$ to regularize / alleviate the optimization difficulties arising from saddle points and pathologies.
The authors conduct experiments on transformers with sizes from 110M to 1.3B and the RefinedWeb dataset. The authors focus on latency vs. batch size, width, etc. for online decoding and validation loss vs. FLOPs. The baselines are low-rank matrices, block-shuffle (Monarch decomposition) matrices, and dense matrices for FFN training, and the authors show that BlockDense in general achieves lower PPL than BlockShuffle on the same training budget (training FLOPs).
Strengths: This paper uses structured matrices as a hardware-friendly efficient training method. Comprehensive latency benchmarking results are quite informative for deploying BlockDense (or other methods) in practice.
This paper also provides comprehensive training-from-scratch experiments with medium-sized transformers.
The writing is clear to follow, and Figure 2 is quite helpful for understanding BlockDense parameterization.
Weaknesses: Self-guided training is effectively adding a regularization term that combines $U$ and $V$, and we need to compare it against other regularization approaches (e.g. spectral regularization) to position the effectiveness of self-guided training with related works. Such comparison is missing from the paper (missing from both Figure 3 and Table 3), and it is hard to assess the effectiveness of self-guided training as a regularization method.
In addition, I don't find the advantage of BlockDense over low ranked matrices. It seems that low-ranked matrices also have low latency (Figure 4) and low validation loss / perplexity on the same training budget (Table 3). The optimization challenge of low-ranked matrices can also be alleviated by self-guided training according to Figure 3.
It is also hard to determine whether the scaling law of BlockDense is necessarily better than that of the dense matrices in Figure 1a. The orange/green lines appear to be nearly parallel to the purple line in both the $10^{18}-10^{19}$ and $\sim 10^{20}$ segments. More quantitative analyses are needed to justify the statement on line 52: "Interestingly, scaling curves in Fig. 1a indicate that structured matrices yield steeper loss scaling curves than the traditional Transformer at its optimal trade-off".
I currently consider the first 2 weaknesses as major (insufficient comparison and justification) and I vote for borderline reject. I am happy to raise my scores if the above concerns are addressed during the rebuttal period.
EDIT: upon reading the rebuttal, I raise my score to the borderline accept.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you compare self-guided training with other regularization methods?
Could you justify why BlockDense is a better efficient training method than low-ranked training?
Could you provide more quantitative analyses on the scaling laws of block dense, low-ranked matrices, block shuffle, and dense matrices (actual slope, confidence interval, etc.)?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There are no other limitations. All weaknesses have been listed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for providing a constructive review and detailed comments.
Before responding in detail, we would first like to clarify our paper's focus. We investigate the performance of three structured matrices in modern LLM training from the efficiency, optimization, and scaling perspectives and provide concrete ablations to improve our scientific understanding of these methods. As a result, 1) we explore BlockDense to cover a broader space of possible parameterizations, not to claim that it is the best candidate, and we found that it underperforms LowRank in some experimental settings; 2) the challenges and proposed solutions (e.g., self-guided training, pre-merge, and scaling curves) were identified or proposed for all the parameterizations.
* **Q1**: In addition, I don't find the advantage of BlockDense over low ranked matrices. It seems that low-ranked matrices also have low latency (Figure 4) and low validation loss .... The optimization challenge of low-ranked matrices can also be alleviated by self-guided training.
**A1**: As stated earlier, this paper is not claiming that BlockDense is the best candidate but aims to investigate the performance of three structured matrices from several aspects within the FFN module of Transformers. Our findings and solutions like self-guided training also work for all of them.
We propose BlockDense for several reasons:
* It is a natural intermediate between LowRank and BlockShuffle, combining low-rank projection with a block-diagonal matrix.
* It shows similar results to LowRank in our various latency tests and has good validation loss, which helps it to cover a broader space of possible parameterizations.
* This parameterization might be more beneficial in other domains or architectures, as we later found that BlockShuffle works better in vision tasks.
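As an illustrative sketch (not the paper's implementation; the exact composition order fixed by the paper's Figure 2 may differ), the three FFN parameterizations can be written in NumPy as:

```python
import numpy as np

def low_rank_matmul(U, V, x):
    # LowRank: replace a dense W by two thin factors, W ~ U @ V.
    return U @ (V @ x)

def block_diag_matmul(blocks, x):
    # Block-diagonal (the BlockShuffle building block): split x into
    # len(blocks) equal chunks and apply each dense block to its chunk.
    chunks = np.split(x, len(blocks))
    return np.concatenate([B @ c for B, c in zip(blocks, chunks)])

def block_dense_matmul(blocks, V, x):
    # BlockDense as described in the review: a low-rank-style dense
    # projection V combined with a block-diagonal map (order assumed
    # here for illustration).
    return block_diag_matmul(blocks, V @ x)
```

With B blocks, the block-diagonal map costs roughly 1/B of the parameters and FLOPs of the equivalent dense matrix, which is where the efficiency gain comes from.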
* **Q2**: Self-guided training is effectively adding a regularization term that combines U and V, and we need to compare it against other regularization approaches (e.g. spectral regularization) to position the effectiveness of self-guided training with related works. Such comparison is missing from the paper (missing on both Figure 3 and Table 3) and it is hard to assess the effectiveness of self-guided training as a regularization method.
**A2** : We provide the clarification about self-guided training and comparison with other regularization techniques below.
* We would like to first clarify that self-guided training is not a typical regularization technique. It does not explicitly constrain or normalize U and V but leverages the dense matrix W to shape the representation learned by structured matrices. Specifically, learning W is unaffected by the additional saddles and pathologies introduced by the structured parametrization, allowing it to learn faster by discovering good specialized features. Then, by decaying the residual contribution of the dense matrix, W can guide the training and transfer the learned hidden state semantics to U and V gradually.
* We initially considered weight normalization to stabilize training but found it limited in this context, and layer normalization increases latency and slows training convergence. In the table below, we also provide a comparison with spectral normalization [1] (we turned to spectral normalization because we found spectral regularization targets generalizability rather than stability) and orthogonal regularization [2]. We hypothesize that these techniques are less effective because they are designed to ensure the backpropagated signal does not vanish or explode, and two of them heavily constrain the weights. However, the challenge of structured parameterization is not only about signal propagation but also the capacity bottleneck of learning a good representation, which typical regularization techniques cannot solve. Regarding the constraints, spectral normalization directly scales the trained weight by its largest singular value, which may hurt performance here, and the additional term in orthogonal regularization also distorts the spectrum.
| LowRank (32% FFN params.) | Perplexity |
|-|-|
| Baseline| 29.22|
|Self-guided training| **28.02** |
|Weight normalization | 29.12 |
|Spectral normalization | 31.93 |
|Orthogonal regularization | 29.18|
In the paper's appendix, we compared self-guided training with other techniques. We will also emphasize this discussion in the revision.
[1]. Spectral Normalization for Generative Adversarial Networks.
[2]. Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?
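A minimal sketch of self-guided training as stated in the review summary, $o = \alpha W x + (1-\alpha)U(Vx)$, with the dense path annealed away (the linear decay schedule here is illustrative, not necessarily the paper's actual annealing):

```python
import numpy as np

def self_guided_ffn(x, W, U, V, alpha):
    # o = alpha * W @ x + (1 - alpha) * U @ (V @ x): the dense matrix W
    # acts as a residual guide whose contribution is decayed during
    # training, leaving only the cheap structured path U @ V.
    return alpha * (W @ x) + (1.0 - alpha) * (U @ (V @ x))

def alpha_schedule(step, decay_steps):
    # One simple (hypothetical) choice: linearly decay the dense-path
    # weight from 1 to 0 over the first `decay_steps` updates.
    return max(0.0, 1.0 - step / decay_steps)
```

Once `alpha` reaches 0, W can be dropped entirely, so inference pays only for the structured matrices.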
* **Q3** : It is also hard to identify whether the scaling law of BlockDense is necessarily better than the dense matrices in Figure 1a. ... Could you provide more quantitative analyses on the scaling laws (actual slope, confidence interval, etc.)?
**A3** : Thanks for your good suggestion. To make the illustration clearer, we applied a linear fit to the scaling results and provide them in the general response PDF. Based on Figure 1 in the PDF, we calculated the slopes and obtained the results outlined in the table below:
| Method | slope|
|-|-|
| Dense (optimal scaling law) | -0.3549 |
| 63% FFN params. | |
| LowRank | -0.3672 |
| BlockDense | -0.3673 |
| BlockShuffle | -0.3718 |
| 32% FFN params. | |
|LowRank | -0.3852 |
|BlockDense| -0.3827 |
|BlockShuffle| -0.3881 |
The table shows that these structured parameterizations all have larger absolute slopes than the dense model at its optimal trade-off. This yields a smaller loss gap for the 1.3B model compared to the 110M models, indicating that structured parameterizations scale as well as or even better than the dense model. Additionally, the scaling curves of the structured parameterizations can be further optimized by finding a better balance between model size and training tokens.
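The slopes above come from a linear fit in log-log space; a minimal sketch of such a fit (illustrative, not the authors' exact fitting code):

```python
import numpy as np

def scaling_slope(flops, losses):
    # Fit log(loss) = a * log(FLOPs) + b and return the slope a.
    # A steeper (more negative) slope means loss falls faster as
    # training compute grows.
    a, _b = np.polyfit(np.log(np.asarray(flops)), np.log(np.asarray(losses)), 1)
    return a
```

For example, feeding in the (FLOPs, validation loss) pairs of the four dense model sizes would recover a slope close to the -0.3549 reported for the dense baseline.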
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Thanks for the rebuttal and the additional results! I have read the rebuttal and the replies to other reviewers.
Q1 My concern is that since BlockDense is an intermediate between LowRank and BlockShuffle, and the performance & latency (even scaling law) results might not reach that of LowRank, the benefits of BlockDense are not sufficiently clear. Could you elaborate on the potential benefits that BlockDense would introduce in the vision tasks?
Q2 The result of spectral normalization is great, but it would be more helpful to know whether the training loss spikes still exist for spectral normalization on low rank matrices (as the self-guided training is more oriented to training stability).
Q3 Thanks for the new results! The figure 1 in the author rebuttal is much clearer now.
At this moment, my Q3 and (half of) the Q2 are well addressed. I will raise my score to borderline accept.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 5PEQ
Comment: We would like to thank the reviewer for their swift response to our rebuttal. Below we further elaborate on our response to address the remaining concerns in Q1 and Q2 completely.
* *For Q1*, we introduced BlockDense to cover a bigger parameterization space because it has consistent latency results with LowRank and achieves slightly better accuracy results than BlockShuffle within the FFN module and NLP tasks. Moreover, to show its benefits more clearly:
* Although the main focus of this paper is to investigate the common problems of these methods rather than to determine which one is best, we find that the answer can depend on the data domain. Unlike in the FFN module for NLP tasks, we observe that block-diagonal matrices perform better on vision tasks. For example, the table below shows ViT-small performance on the CIFAR-10 dataset: BlockDense surpasses LowRank by 0.6 points while being much faster than BlockShuffle. We attribute this to vision tasks favoring locality due to the local correlations in pixel space, for which block-diagonal matrices provide a more suitable inductive bias.
|| Model Params (M) | CIFAR10 Acc |
|-|-|-|
| ViT (H=384) | 21.3 | 92.5|
|LowRank | 8.3| 89.6|
|BlockDense | 8.3| 90.2 |
| BlockShuffle | 8.3 | 90.4 |
* The BlockDense parameterization can be seen as a generalization of the LowRank parameterization: by simply setting B=1 for the first matrix, one recovers the low-rank form. Expressing both in the same parameterization allows us to explore hybrid and mixed structures more easily. As this paper's aim is to explore their common problems, we leave exploring different hyper-parameters of BlockDense as future work.
* *For Q2*: We found that there are no training spikes with spectral normalization; however, it constrains the weights by rescaling them via the spectral norm. As we stated in our first response, the main problem is the capacity bottleneck of learning a good representation. Self-guided training allows the dense weights to transfer their learned hidden-unit semantics to the structured matrices, which suffer from symmetry problems during the feature specialization phase. As shown in Figure 3 of the submitted paper, it helps with both the slower convergence and the loss spikes. This, together with the fact that it places no constraints on the weights, makes our technique different from and better than classical regularization methods.
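The relation between BlockDense and LowRank described in the Q1 answer can be sketched numerically. This reflects our reading of the parameterization (a block-diagonal first factor followed by a dense second factor); all names and shapes are illustrative:

```python
import numpy as np

def block_diagonal(blocks):
    """Assemble a block-diagonal matrix from a list of dense blocks."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

d, r, B = 8, 4, 2
rng = np.random.default_rng(0)

# BlockDense sketch: block-diagonal first factor V, dense second factor U.
V_blocks = [rng.standard_normal((r // B, d // B)) for _ in range(B)]
V = block_diagonal(V_blocks)        # (r, d), block-diagonal
U = rng.standard_normal((d, r))     # dense
W = U @ V                           # effective weight, rank <= r

# With B = 1 the first factor degenerates to a single dense (r, d) block,
# which is exactly the LowRank parameterization U @ V.
V_lowrank = block_diagonal([rng.standard_normal((r, d))])
```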
---
Rebuttal 2:
Title: Response to Reviewer 5PEQ
Comment: Thanks for your quick reply and suggestions.
* For Q1, yes, we will add the table of vision results and a discussion to the paper to help readers build a better understanding of the different parameterizations. For the FFN module in NLP tasks, BlockDense still needs to be explored further, as we stated during the rebuttal and in our paper; as a result, we did not claim the proposed BlockDense method as the main contribution of this paper. We prefer to keep it in the paper because it covers a larger parameterization space and generalizes LowRank. Given its good performance, keeping it is informative at least as an ablation against the LowRank parameterization.
* For Q2, the training instability arises from the deep linear form U(Vx), which introduces additional symmetries and thus a more complex loss landscape to optimize over [1, 2]. To illustrate the relationship between loss spikes and capacity loss, the table below shows that high-rank models suffering severe loss spikes can perform even worse than very low-rank models. It represents our early experiments, where we first observed loss spikes with LowRank on the CIFAR-10 dataset while sweeping learning rates; poor results are highlighted in bold. We found that higher ranks were more prone to severe loss spikes at very large learning rates. For instance, the Rank 256 model shows an accuracy of 87.50, worse than the worst Rank 8 result, even though the Rank 8 model is 32 times smaller and owes its poor results to slower convergence rather than spikes.
| lr | 1.0e-4 | 2.5e-4 | 5.0e-4 | 7.5e-4 |
|----------------|------------|------------|------------|------------|
| Rank 4 | **87.49** | **89.77** | 91.22 | 90.02 |
| Rank 8 | **88.42** | **90.47** | 90.89 | 90.55 |
| Rank 16 | **88.81** | **91.07** | 92.31 | 92.21 |
| Rank 128 | 91.28 | 93.14 | 93.73 | **89.10** |
| Rank 256 | 91.95 | 93.60 | 93.16 | **87.50** |
| Dense 384| 91.86 | 93.67 | 93.66 | 93.36 |
This is a very good question that we were also curious to investigate, especially early in the project. The results in the table above are very preliminary and were obtained on CIFAR-10 with a ViT architecture. Note that the other hyperparameters of this table, and thus the dense results, differ from those in our last response because these are early experiments. Since we discovered that self-guided training addresses the optimization challenge without hyper-parameter tuning, we decided not to include these results in the submitted version of our paper. However, we will include a more elaborate discussion of the loss spikes and training instability in the camera-ready version, with detailed results presented in the appendix.
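The extra symmetries of the deep linear form U(Vx) mentioned above are easy to verify numerically: any invertible rescaling S maps (U, V) to a different parameter pair that computes the same function. A minimal check, with arbitrary illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((6, 3))
V = rng.standard_normal((3, 6))
x = rng.standard_normal(6)

# For any invertible S, (U S, S^{-1} V) computes the same map as (U, V).
# This continuous family of equivalent parameters is one source of the
# more complex loss landscape discussed in the rebuttal.
S = np.diag([2.0, -0.5, 3.0])
U2 = U @ S
V2 = np.linalg.inv(S) @ V

y_original = U @ (V @ x)
y_rescaled = U2 @ (V2 @ x)
```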
[1]. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
[2]. Neural networks and principal component analysis: Learning from examples without local minima. | Summary: The paper studies efficient Transformer variants. Unlike most existing works on efficient attention, this work proposes methods to enhance the efficiency by focusing on feedforward networks (FFNs). It explores several efficient linear layer designs, and proposes techniques to address the training issues and decoding efficiency for practice. The experiments show that these structured FFNs not only reduce computational costs but also show promising scaling behavior on language modeling.
Strengths: - Paper is clearly written and well motivated.
- While there has been voluminous literature on efficient attention using structured matrices, making FFNs efficient using structured matrices is new to the community.
- This paper discusses interesting techniques to address the optimization difficulty in training Transformers with structured matrices in FFNs.
- The experiments cover models of different sizes and demonstrate the scaling behavior. The experiments also clearly demonstrate the efficiency gains in practice.
Weaknesses: - The experimental study can be made more solid if the authors can additionally provide model quality analysis on downstream tasks (e.g., SuperGLUE) and the finetuning regime.
- The second part of the related work section could be enhanced by including additional studies on structured matrices in Transformers, emphasizing that the majority of existing efforts focus primarily on the attention module. To name a few:
- Lee-Thorp, James, et al. "Fnet: Mixing tokens with fourier transforms." arXiv preprint arXiv:2105.03824 (2021).
- Luo, Shengjie, et al. "Stable, fast and accurate: Kernelized attention with relative positional encoding." Advances in Neural Information Processing Systems 34 (2021): 22795-22807.
- Choromanski, Krzysztof, et al. "From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers." International Conference on Machine Learning. PMLR, 2022.
- Minor issues.
- Lines 4 & 71: transformer $\to$ Transformer
- Line 168: alpha $\to$ $\alpha$
Technical Quality: 3
Clarity: 3
Questions for Authors: Is there any special technique to initialize the matrices U and V? What is the scale/variance of the initialization?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discuss the limitation of this paper in Sec. 5. I believe that another limitation is that the paper lacks evaluations on downstream tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable suggestions and careful reading of this paper. We will fix the minor issues in the revision and list the detailed responses to the questions below. Hope our reply can address the concerns.
* **Q1**: The experimental study can be made more solid if the authors can additionally provide model quality analysis on downstream tasks (e.g., SuperGLUE) and the finetuning regime.
**A1**: Thank you for your valuable suggestion. We have added the results of downstream performance in Table 1 and Table 2 in the general response PDF, as well as the fine-tuning performance in the table below.
* We use the lm-evaluation-harness repository for downstream tasks like PIQA and HellaSwag. To achieve good downstream performance, we train small-sized Transformers on 100B tokens. Table 1 in the general response PDF shows the consistent performance of these zero-shot tasks with the validation perplexity of training. All structured parameterizations with 32% FFN parameters have results close to the dense models (e.g., a 0.9-1.4-point averaged accuracy decrease). Additionally, we also provide the results of Transformer-XL trained based on the optimal scaling law in Table 2 in the response PDF, showing good results for structured matrices and consistent improvement with self-guided training.
* We applied fine-tuning to our trained models (small-sized Transformers trained on 100B tokens) using the popular Transformers repository. As GLUE is better supported in this codebase than SuperGLUE, we evaluated two GLUE tasks, QQP and SST-2, within the short rebuttal period. As shown below, the structured parameterizations achieve accuracy very comparable to the dense model (e.g., BlockDense is only 0.3 points lower on QQP) with only 32% of the FFN-module parameters.
| Method | Validation PPL | QQP (acc) | SST-2 (acc) |
|----------------------|----------------|-----------|-------------|
| Transformer-s (110M) | 16.68 | 90.0 | 92.0 |
| 32% FFN params. | | | |
| LowRank | 18.65 | 89.6 | **92.2** |
| BlockDense | **18.61** | **89.7** | 91.7 |
| BlockShuffle | 18.94 | 89.2 | 91.5 |
To conclude, the downstream and fine-tuning performance consistently shows the strong potential of these structured matrices.
* **Q2**: The second part of the related work section could be enhanced by including additional studies on structured matrices in Transformers, emphasizing that the majority of existing efforts focus primarily on the attention module.
**A2**: Thanks for the good suggestion. We will add these papers to the related work. Yes, they are all related and very interesting papers that apply structured matrices, such as Toeplitz, over the sequence dimension, thus making the attention module very efficient. In contrast, our paper investigates structured parameterizations in the FFN module and finds that Toeplitz matrices are not well suited to mixing information along the hidden-state dimension for NLP tasks. We also focus on their scaling, optimization, and efficiency aspects in modern LLM architecture training.
* **Q3**: Is there any special technique to initialize the matrices U and V? What is the scale/variance of the initialization?
**A3** : We put the discussion about initialization in Section B.1 of the submission, where we mentioned that we used spectral initialization for LowRank and orthogonal initialization for BlockDense and BlockShuffle. Specifically, we use random Gaussian initialization with a variance of 0.02, similar to GPT-2, to initialize the original weight W. Then, as suggested by [1], U and V in LowRank parameterization are initialized by using the SVD decomposition of W. For the other two methods, we apply orthogonal initialization as recommended in the paper on deep linear networks [2]. Experiments in Section B.1 on the small dataset WikiText-103 further validated our choice.
[1]. Initialization and regularization of factorized neural layers. ICLR'21
[2]. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. Andrew M. Saxe, et al. 2014.
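The spectral (SVD-based) initialization of U and V described in A3 can be sketched as follows; the dimensions are illustrative, and only the GPT-2-style Gaussian initialization of W with variance 0.02 is taken from the text:

```python
import numpy as np

d, r = 16, 4
rng = np.random.default_rng(0)

# Initialize the original dense weight W with a Gaussian, as in GPT-2.
W = rng.normal(0.0, 0.02, size=(d, d))

# Spectral initialization: factor W via truncated SVD, splitting the
# singular values evenly between the two factors.
u, s, vt = np.linalg.svd(W, full_matrices=False)
U = u[:, :r] * np.sqrt(s[:r])          # (d, r)
V = np.sqrt(s[:r])[:, None] * vt[:r]   # (r, d)

# U @ V is the best rank-r approximation of W (Eckart-Young).
approx_error = np.linalg.norm(W - U @ V)
```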
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: I thank the authors for the rebuttal. All my concerns are addressed.
For GLUE I would recommend finetuning on the mixture of all the data and evaluate on each task individually - that can be both efficient and informative. That being said, the reported results in the rebuttal look promising to me.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 7Gik
Comment: Thank you for your quick reply and suggestion. Based on your suggestion, we trained the model with a binary classification head on a mixture of the GLUE benchmark, excluding MNLI and STS-B, since they are not binary classification tasks and thus are difficult to mix with the others.
We trained the models for 6 epochs, using a batch size of 128 and a swept learning rate. From the table below, we observe that structured parameterizations exhibit performance close to that of the dense model (e.g., a 0.64 accuracy loss for BlockDense).
Meanwhile, we also noticed that the performance on very small datasets, including CoLA, MRPC, and RTE, is not very stable.
This might be fixed with further hyperparameter search and more expensive runs, because we believe smaller datasets tend to be more sensitive to hyperparameters. Moreover, we did not weight the datasets when mixing them, which may also disadvantage the smaller ones.
|| Validation PPL | CoLA (matt.) | SST-2 (acc) | MRPC (acc) | QQP (acc) | QNLI (acc) | RTE (acc) | Avg. |
|--|-|-|-|-|-|-|-|-|
| Dense | 16.68 | 45.32 | 90.48 | 80.15 | 89.71 | 86.36 | 64.98 | 76.17 |
| 32% FFN Params. | | | | | | | | |
| LowRank | 18.65 | 43.01 | 90.13 | 80.88 | 89.50 | 86.09 | 63.18 | 75.47 |
| BlockDense | 18.61 | 42.16 | 90.14 | 78.67 | 89.82 | 86.34 | 66.06 | 75.53 |
| BlockShuffle | 18.94 | 44.51 | 89.79 | 79.90 | 89.36 | 85.56 | 61.73 | 75.14 | | null | null | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their constructive feedback. Before replying to the comments one by one, we would like to highlight our contributions and clarify common questions in this general response:
In this paper, we investigate the performance of three structured parameterizations within the FFN modules in modern LLM architecture training, focusing on efficiency, optimization, and scaling aspects. In detail,
* We conducted a comprehensive efficiency study in various scenarios, using our proposed pre-merge technique, to show their good latency performance.
* We identified the optimization challenges of the structured parameterizations and proposed an effective method called self-guided training to boost performance for all of them.
* We showed that these structured matrices have steeper scaling curves and can utilize training FLOPs more effectively than the dense model. To further validate this point, we trained a wide and structured network of medium size to have fewer parameters and lower perplexity.
For common concerns, we added the downstream performance in Table 2 and Table 3 in the attached PDF, showing consistent performance with validation perplexity of pre-training, and close results to the dense model. Additionally, we applied a linear fit to the scaling curves to make the illustration clearer in Figure 1 of the attached PDF.
Pdf: /pdf/dd6a01bd0e1bf25eba67f12178c5fd6740d133ea.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications | Accept (poster) | Summary: The authors of this paper built PedCorpus, a Chinese pediatric dataset, and PediatricsGPT, the first Chinese pediatric LLM assistant. Their model was built via continuous pre-training, full-parameter supervised fine-tuning (SFT), direct following preference optimization, and parameter-efficient secondary SFT. Performance evaluation results show that their model outperforms other models.
Strengths: Originality: This is the first time that Chinese LLM has been applied to pediatrics. The author also proposed DFPO, which improves the performance of preference optimization.
Quality and Clarity: Generally, the paper clearly presents the datasets, the methodology, and the performance evaluation, including the experiment settings and results.
Significance: The model could complement the current shortage of healthcare resources in pediatrics.
Weaknesses: 1. The techniques in this paper are not that novel, as it applies LLM to another field with main techniques either not original or adapted from others (e.g., DFPO).
2. The paper focuses on the application level. I believe the model could be important if it is deployed. However, there is no evaluation from real users (patients or patients' parents). Doctor's evaluations could be different from the patients' evaluations, especially since this model will be used for children. This weakens this paper's contributions. I think the authors should at least discuss their plan of deployment or intervention as future work.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Do you think DFPO performs better only on your specific task or can generally perform better than RHLF and vanilla DPO? If it can generally perform better, then this can be a great contribution.
2. How will you deploy your model in practice to complement the current shortage of pediatric healthcare resources?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors list two limitations, which make sense. I wonder whether your model can be 100% accurate. If it generates errors or hallucinations, it should be important to find them and avoid showing them to patients and their parents. As your model will be used for medicine and children, this should be more important than the LLM applied to other fields. This again goes back to my question before: how will you deploy your model in practice?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our new preference optimization and thorough evaluations. We present our detailed responses below.
**Q1**: Clarification of technical novelty.
**A1**: As stated in lines 33-43, this paper addresses the shortcomings at the dataset and framework level of the LLM construction in the Chinese healthcare domain. We propose a high-quality dataset PedCorpus through a series of new instruction-building mechanisms and a systematic training pipeline through new strategies to improve the different challenges in multi-stage training. Our technical novelty is distributed among these mechanisms and strategies. We clarify other contributions besides DFPO mentioned by the reviewer as follows.
* As stated in lines 54-56, we propose a hybrid instruction pre-training strategy in Continuous Pre-Training (CPT) to bridge the capability weakening due to corpus format discrepancies between the internal and injected medical knowledge of foundation models, facilitating knowledge accumulation and extension.
* As stated in lines 186-199, we devise a mixture of universal-specific experts strategy to tackle the competency conflict between medical generalist and pediatric expertise in secondary SFT, which strengthens the model's adaptability to distinct downstream tasks. Specific experts master different pediatric tasks by soft routing gating control with noise. Also, a universal expert is consistently activated to prevent general knowledge forgetting and mitigate competency conflict.
* As stated in lines 106-107, we present a role-playing-driven instruction building mechanism to convert the consolidated textual data into well-designed pediatric instructions. The proposed approach endows the advanced language model with the professional role of an expert pediatrician to generate accurate instruction knowledge for the target model training.
* As stated in lines 123-124, we design a progressive instruction reconstruction mechanism to distill the sampled instructions to ensure informative pediatric model responses. Unlike the traditional self-instruct pattern of using APIs, our mechanism guides APIs to take the perspective of the experienced pediatrician to complete progressive refinement tasks in the given instruction and answer scenarios.
It is worth noting that these technologies are domain-agnostic and can be absorbed into other communities' LLM constructions to facilitate long-term developments.
***
**Q2**: About the user study and future plan.
**A2**: Constructive proposal. We offer the following two discussions.
**User Study**: With limited rebuttal time and available resources, we did our best to conduct a user study covering 50 patients (each paid $300), shown in Figure 1 of the **Response.pdf**. Average evaluation results from participants are reported on three benchmarks. We observe performance trends similar to those of the original user study in Figure 3, implying the superiority of our model. To investigate the differences between doctor and patient evaluations, we measured the consistency of the two user studies using the Pearson Correlation Coefficient (PCC). The high PCC of 0.89 shows that doctor evaluations reflect patient-preferred outcomes to some extent, suggesting the reasonableness of the evaluation pattern in the main manuscript. A potential reason is that patient judgments about personalized medical diagnoses stem largely from interactive knowledge consultations with doctors, so patient behavior is influenced by doctor preferences.
**Future Plan**: We plan to conduct randomized controlled trials with patients from more diverse pediatric medical departments to verify the effectiveness of our model in real-world applications.
We promise to add the above analyses to the revision.
***
**Q3**: Does DFPO perform better on specific or general tasks?
**A3**: In Table 3 of the manuscript, we conducted ablation studies on five benchmarks to investigate the DFPO performance. These benchmarks include three pediatric-specific datasets and two general-purpose datasets covering different medical departments. We found that DFPO yields average gains of 1.4% and 2.7% in GPT-4 and doctor evaluations on pediatric tasks, respectively, and average gains of 1.5% and 2.5% in GPT-4 and doctor evaluations on general tasks. In addition, both vanilla DPO and RLHF perform less well than DFPO on the general tasks. These findings confirm that DFPO performs better overall, leading to superior human preference alignment.
***
**Q4**: How to deploy the model in practice?
**A4**: We will adopt two deployment methods.
**Offline Terminal Deployment**: We will deploy smart terminals equipped with our models in our partner healthcare institutions to provide efficient and accurate medical diagnosis and treatment services to patients.
**Online Application Deployment**: We plan to package our modeling system into an application that is easy for users to access on mobile devices. Our application will provide services such as online consultation and consultation record inquiry to reduce the pressure of offline treatment.
***
**Q5**: Regarding the accuracy of the model.
**A5**: We emphasize that no medical LLMs can have 100% accuracy in practice. To avoid errors and hallucinations, we are committed to performing the following measures in our deployments.
* We will combine transformer-based representation editing techniques to manipulate the model to generate harmless responses in the factual semantic space.
* We will add contrastive decoding strategies to reduce the probability of generating unfaithful content for the next token prediction.
* We will introduce the retrieval-augmented generation to improve the trustworthiness of medical answers.
* We plan to build LLM-based agents using external tools for verifiable factual knowledge to ensure response security with real-time interventions.
---
Rebuttal Comment 1.1:
Comment: Thanks for all your work and clarification. My concerns are addressed.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer MG71
Comment: We thank the reviewer for the meticulous advice! | Summary: - This paper introduces PediatricsGPT, a Chinese AI assistant for paediatrics
- They created a large dataset (PedCorpus) with 300k+ medical instructions
- The training process is pretty involved - includes pre-training, fine-tuning, and preference alignment
- They came up with some new techniques, like hybrid instruction pre-training
- Evaluation was thorough - used metrics, GPT-4, and even had real doctors test it
- Results show it outperforms other Chinese medical AI models in various health tasks, as well as ChatGPT
- The authors discuss its potential to support doctors and improve pediatric care
Strengths: - It's tackling a real problem - the shortage of pediatric care in China
- The dataset they built (PedCorpus) is very comprehensive and high-quality
- Their evaluation is thorough, using multiple methods including real doctors
- The model outperforms existing Chinese medical AIs, which is impressive
Weaknesses: Overall the paper would benefit from more information on dataset construction, for each specific task and in general. In many cases, it is not clear what is meant by manual sampling, automatic extraction, dataset collection and similar.
It would be great to see the standard deviation for the reported results.
It would be great to see a comparison with human experts, but this is too much to ask for now, so future work.
Technical Quality: 3
Clarity: 3
Questions for Authors: Line 12: Immediately, the... Please rephrase, a bit hard to understand
Line 29: Please either reference your own results, or cite a resource that supports your claim.
Line 101: Please add information on how was this done, how were the books selected and how was the knowledge automatically extracted.
Line 109: Given that real patient conversations were used, it would be beneficial to know from where exactly this data was obtained, and whether the appropriate ethical guidelines were followed.
Line 107: Manually sampled - does this mean a human went through everything and selected the appropriate data.
Line 164-168
- PedCorpus-DFPO is selectively sampled from vanilla PedCorpus - What does it mean "selectively sampled", how was this done? Were there any biases? What parts of PedCorpus were sampled and in which proportion?
- Humanistic stylistic rephrasing - what exactly is this? Who performed the rephrasing (humans or AI, or a combination)? What guidelines were used? How was the quality and consistency of the rephrasing ensured?
- Low-capability medical assistant - please add a bit more info on why was the HuaTuo model chosen, and why is it low capability (how was this verified).
Line 208: How were the 208 difficult tasks sampled, and how was difficulty determined?
The paper would benefit (especially the first couple of pages) from a review of the grammar and sentence clarity.
Note: most of the datasets are in Chinese, maybe some information was obvious, but because of my lack of understanding of the language I was unable to determine that.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: A very limited validation on real-world use cases, this is more of an initial research paper that can be built upon and further tested in the real world. Would be great to address this in the main part of the paper and not an appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our comprehensive datasets, new techniques, and thorough evaluations. We present our detailed responses below. The common questions regarding the ethical issues can be found in the **global response**.
**Q1**: Explain the dataset construction including textbook selection, automatic extraction, and manual sampling.
**A1**:
Valuable proposal! We provide detailed explanations based on three parts:
* As stated in lines 101-108, pediatric task-specific data is derived from specialized textbooks, guidelines, and knowledge graphs. In this case, specialized physicians were invited to select valuable textbooks and guidelines from the candidate books. The selection criteria consist of checking the corpus for the presence of informative medical knowledge and up-to-date healthcare content.
* Suitable natural language processing techniques are used to accomplish the automated extraction process, including named entity recognition (NER) and syntactic parsing. Specifically, we utilize the NLTK tool to extract vital medical terms (e.g., disease names, symptoms, and drug names) from the text. The Stanford Parser-based dependency parsing model is used to analyze sentence structure to find relations among valid medical terms. These techniques select knowledge-intensive passages from the streaming corpus for subsequent instruction data construction.
* As stated in lines 117-124, the general medical data is mainly from distilled medical datasets. In this case, manual sampling means that we first thoroughly check the completeness of the instructions across multiple dimensions, including whether they contain meaningful symptoms and treatments. Instructions missing critical information are removed. Next, we employ a BERT-based semantic similarity model to assess the instructions' information density and select the informative parts.
The above procedures help us select appropriate data to construct high-quality PedCorpus. We promise to incorporate the above details in the revision to provide comprehensive insights.
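As a rough sketch of the similarity-based filtering step, the toy function below drops near-duplicate instructions using cosine similarity. It is a stand-in only: the embeddings here are random rather than real BERT features, and the function name and threshold are our own illustration:

```python
import numpy as np

def filter_by_similarity(embeddings, texts, threshold=0.9):
    """Keep an instruction only if its cosine similarity to every
    already-kept instruction stays below the threshold.

    Toy stand-in for the BERT-based similarity filtering; not the
    authors' actual pipeline.
    """
    kept, kept_vecs = [], []
    for vec, text in zip(embeddings, texts):
        v = vec / np.linalg.norm(vec)
        if all(float(v @ u) < threshold for u in kept_vecs):
            kept.append(text)
            kept_vecs.append(v)
    return kept

rng = np.random.default_rng(0)
emb = rng.standard_normal((50, 16))          # fake instruction embeddings
texts = [f"instruction-{i}" for i in range(50)]
kept = filter_by_similarity(emb, texts)
```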
***
**Q2**: Providing the standard deviation of the reported results.
**A2**: Table 4 of the **Response.pdf** provides the mean ± standard deviation of PediatricsGPT results on different benchmarks. We find that the deviations of our model under different metrics across three tasks are within the range of ± 0.8, with most of the deviations of the 13B results in the much smaller interval of ± 0.5. These observations confirm the consistency and effectiveness of the experimental results. We promise to add the above analysis to the revision.
***
**Q3**: Comparison with human experts.
**A3**: We fully agree with the reviewer's proposal, as incorporating results from experts provides comprehensive insights. To this end, we made our best effort within the limited rebuttal period to hire ten pediatricians ($300 each) to provide human responses to the test queries on the different benchmarks. To enable evaluation with ground-truth-based metrics, we specified evaluation protocols and required responses to be as well-presented and logical as possible. Table 4 of the **Response.pdf** shows the average results from the human experts. We found that experts perform best on EviDiag, which requires multi-round consultations. This makes sense, since pediatricians have more experience with interactive diagnosis in practice than the models do. The superior performance of our model on knowledge Q&A and treatment recommendation confirms its potential for pediatric services.
We are committed to adding expert results to the revision.
***
**Q4**: Description of line 12.
**A4**: Constructive suggestion. We will rewrite the original representation in the revision as "After that, we utilize the full-parameter Supervised Fine-Tuning (SFT) to incorporate the general medical knowledge schema into the models." for intuitive understanding.
***
**Q5**: Reference to line 29.
**A5**: Thanks for the reminder. We promise to reference our own results in the revision to support the stated claim.
***
**Q6**: Related issues of PedCorpus-DFPO.
**A6**:
* To ensure that the DFPO phase aligns user preferences in different instruction tasks and genres, selective sampling refers to sampling 3889, 3889, 3889, and 3889 instructions from pediatric data, real conversations, Huatuo-26M, and MedDialog, respectively. We extract the original instruction features in these four parts by BERT. Then, K-means clustering is used to identify the instruction cluster features through the optimal 3889 clusters. We select the closest instruction in each cluster as the sampled instruction to avoid data bias.
* Humanistic stylistic rephrasing refers to making the responses as caring and well-presented as physicians. The process is performed by specialized physicians to ensure quality.
* Huatuo (i.e., BenTsao) is chosen to generate low-quality responses because its performance was the worst on multiple medical tasks in the latest previous study [1], and we find in practice that its responses are often hallucinatory and inaccurate.
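The selective sampling step described above (BERT features → K-means → nearest-to-centroid pick) can be sketched as follows. This is a minimal illustration with toy embeddings and a small cluster count rather than 3889; the function names are our own, and the embeddings stand in for BERT features:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: random data points as initial centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # distance of every point to every center -> (n, k)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels, centers

def select_representatives(X, k):
    """Pick, per cluster, the instruction embedding closest to the centroid."""
    labels, centers = kmeans(X, k)
    picked = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx):
            picked.append(int(idx[np.linalg.norm(X[idx] - centers[c], axis=1).argmin()]))
    return picked

# toy run: 20 fake "instruction embeddings", 4 clusters instead of 3889
emb = np.random.default_rng(1).normal(size=(20, 8))
picked = select_representatives(emb, 4)
```

In the paper's setting, `X` would hold the BERT features of all candidate instructions from one source and `k = 3889`.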
***
**Q7**: Difficult task sampling of line 208.
**A7**: We invited an expert physician from each of the six medical departments to manually sample 50 difficult instances. Difficult tasks were defined by considering two dimensions, including whether they contained rare diseases and whether they required complex logical reasoning.
***
**Q8**: Review of the first couple of pages.
**A8**: Constructive advice. We have reviewed the grammar and sentences of the first couple of pages and promise to fix them in the revision.
***
**Q9**: About real-world use cases.
**A9**: We promise to add more real-world use cases to the main part of the extra page after acceptance.
[1] Yang, S., Zhao, H., & Zhu, S. Zhongjing: Enhancing the Chinese medical capabilities of large language model through expert feedback. In AAAI (Vol. 38, No. 17, pp. 19368-19376).
---
Rebuttal 2:
Comment: Dear Reviewer eEmB:
We would like to thank the reviewer for taking the time to review our paper and for the comments.
Please kindly let us know if anything is unclear. We truly appreciate this opportunity to clarify our work and shall be most grateful for any feedback you could give to us.
Best regards,
Authors | Summary: This paper tried to build an AI-powered pediatric consultation system. Their motivation is current Chinese conversation LLMs for healthcare underperform in pediatric applications due to insufficient instruction data and bad training procedures. To tackle this problem, the authors proposed PedCorpus, a high-quality dataset with over 300K multi-task instructions, and PediatricsGPT, the first pediatric LLM assistant in Chinese. The PediatricsGPT baseline models, enhanced with a hybrid instruction pre-training mechanism, supervised fine-tuning, preference optimization, and a strategy to balance general and pediatric expertise, consistently outperforms previous Chinese conversation LLMs.
Strengths: - The authors are inspired by the new progress in LLMs and then build new datasets for Pediatrics application and new domain-specific LLM model weights for this task.
- The paper's structure is clear and the authors propose a rigorous evaluation pipeline.
Weaknesses: - "Direct Following Preference Optimization (DFPO) in human intention alignment is devised to enhance response robustness and align human preferences." - where is the evidence for this? It would be better to visualize some cases other than using ablation study as we can not see a huge performance difference.
- "ternary instances in the knowledge graphs" How to build the ternary instance for your task.
- "complementary resources" What is a complementary resource?
- Not sure how to protect patient privacy in your Real Doctor-patient Conversations subset.
- As a pediatrician, I don't think the current model has "competence in different pediatric and general healthcare service scenarios". I reviewed your data in the supplementary material, and it seems most of the data do not contain real pediatric AI challenges. To prove the generalization of the proposed model or the value of the dataset in pediatrics, you should consider building datasets with more specialized pediatric diseases such as autism, cerebral palsy...
- It is also important to test the performance of the proposed model on other out-of-domain datasets, such as medical QA datasets for adults.
- Your dataset-building process is confusing. Why choose an “expert pediatrician” GPT-4 rather than real pediatricians to help construct the dataset?
- "doctor-like and patient-friendly" - how to define doctor-like and patient-friendly. Do you mean 'professional' here?
- The scale of user study is limited. You state "We invite three doctors (each paid $300) to determine the winner of pairwise models by the majority voting rule." However, it would be better to conduct a wide user study from pediatricians as three people can not represent the human preference of the whole group.
- Institutional Review Board (IRB) Approval is needed as this paper has human user study and the released data includes real doctor-patient conversations.
- Some minor issues:
"Chinese medicine" can be replaced by healthcare service in Chinese as its clinical meaning can be found here: https://www.hopkinsmedicine.org/health/wellness-and-prevention/chinese-medicine
Technical Quality: 3
Clarity: 2
Questions for Authors: You can check the Weaknesses part and try to answer the question proposed in each item.
As a physician, I agree this paper has enough novelty and can serve as a useful foundation model in the future. Thus I tend to give Borderline to this paper before reading the rebuttal. However, there are still plenty of issues that the authors need to solve.
During the rebuttal, the authors need to address the missing IRB approval first. If IRB approval is not attached, the dataset and model may raise ethical issues and cannot be released.
I am also curious how ChatGPT achieves good medical and pediatric QA performance (only slightly lower than the proposed PediatricsGPT model) with in-context learning.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The main limitation is lacking IRB Approval. And most of the data do not contain real pediatrics AI challenges. This dataset can only be considered as a supplement of current released medical QA data for adults in China such as DISC-MedLLM.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects', 'Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Data quality and representativeness']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Insightful comments! Below are some specific responses. The common questions regarding the ethical issues can be found in the **global response**.
**Q1**: Evidence on the effectiveness of DFPO.
**A1**: We provide evidence both quantitatively and qualitatively.
* Since the degree of preference alignment is difficult to measure with traditional metrics, we evaluated it using GPT-4 and doctor assessments across different response dimensions. From Table 3, when DFPO is removed, the model exhibits an average 2.8\% performance degradation across five different benchmarks in doctor evaluations, and specifically a 4\% drop on webMedQA. These observations show that DFPO provides a user-friendly response style that aligns with human preferences in expert studies.
* We provide a comparison of the model response with and without DFPO for the same query. We show the English-translated version of the Chinese response in Figure 2 of the **Response.pdf**. The response content is partially omitted due to space constraints. We observe that DFPO helps the model to correct the “no specific treatment” hallucination and provide informative content, leading to better robustness. Moreover, the model response after DFPO is more caring and humanistic, in line with human preferences. We promise to add the qualitative analysis to the revision.
***
**Q2**: How to build ternary instances?
**A2**: The sequential steps are as follows:
* Structured knowledge is extracted from the consolidated textual database using natural language processing methods. Specifically, we used named entity recognition and relationship extraction techniques to identify medical entities (e.g., diseases, symptoms, and medications) and their relationships from the text.
* Then, we assemble ternary instances in the knowledge graph, which contains three parts: subject, predicate, and object.
* We ensure the accuracy of ternary instances through expert validation and manual scrutiny to provide a solid foundation of medical knowledge for our task.
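As an illustration of the assembled structure in the second step, a ternary instance is simply a (subject, predicate, object) record. The entity and relation values below are invented examples, and deduplication stands in for a small part of the manual scrutiny:

```python
from typing import NamedTuple

class Triple(NamedTuple):
    subject: str    # medical entity, e.g., a disease
    predicate: str  # relation type produced by relation extraction
    obj: str        # related entity, e.g., a symptom or medication

# hypothetical NER/RE output: (entity, relation, entity) tuples
raw_relations = [
    ("pneumonia", "has_symptom", "fever"),
    ("pneumonia", "treated_with", "amoxicillin"),
    ("pneumonia", "has_symptom", "fever"),  # duplicate to be removed
]

# assemble and deduplicate ternary instances for the knowledge graph
triples = sorted(set(Triple(*r) for r in raw_relations))
```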
***
**Q3**: What is a complementary resource?
**A3**: Since corpora from textbooks, guidelines, and knowledge graphs have different characteristics, they provide complementary resources for pediatric data extraction. Specifically, textbooks contain authoritative disease content from basic to advanced levels, providing systematic knowledge structures. Guidelines focus on hands-on and diagnostic applications, providing knowledge of specific treatment recommendations. Knowledge graphs contain hierarchical correlation semantics between diseases and symptoms, providing evidence-based diagnostic knowledge.
***
**Q4**: Regarding the model generalization and the dataset value in pediatrics.
**A4**: We want to clarify two points.
* As shown in Figures 4\&5, we performed extensive comparisons with other baselines on the adult general medical benchmarks CMD and webMedQA, which contain QA data from different departments. These benchmarks belong to the OOD test data since they are not included in the training set. From lines 257-264, our model outperforms previous LLMs in most departments, proving robust generalizability.
* From line 103, PedCorpus incorporates 131 pediatric diseases from 11 broad categories from textbooks to provide specialized pediatric knowledge. From Table 1, PedCorpus also includes data from multiple medical departments to supplement general medical knowledge. In this case, the cases in Supplementary are randomly sampled from the mixed data only to show the instruction construction process. We will open-source pediatric and general medical data separately to provide different values to subsequent studies.
***
**Q5**: Reasons for choosing GPT-4 construct data.
**A5**: We want to clarify that hiring many pediatricians to build large-scale datasets from scratch is labor-intensive and costly. Using the advanced GPT-4, which matches human preferences well in a role-playing manner, is a mainstream and effective strategy. Nevertheless, we employed several pediatricians to participate in the instruction checking to ensure the professionalism of the datasets.
***
**Q6**: About doctor-like and patient-friendly.
**A6**: "Doctor-like" means that the response contains professional content like the physician's. "Patient-friendly" means that the response is well-presented, logical, and informative, making it easy for the user to understand.
***
**Q7**: About the user study.
**A7**: With limited rebuttal time, we did our best to conduct a user study covering 50 participants (each paid $300) in Figure 1 of the **Response.pdf**. Average evaluation results from participants are reported on three benchmarks. We observe similar performance trends as in the original user study from Figure 3, implying the superiority of our model. Furthermore, we evaluated the consistency of the results from the two user studies using Pearson Correlation Coefficients (PCC). The PCC score of 0.89 indicates the effectiveness of the small-scale user study. We promise to optimize the related analysis in the revision.
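For reference, the PCC consistency check between the two user studies amounts to a single correlation computation. The scores below are made-up placeholders, not the actual study data:

```python
import numpy as np

# hypothetical per-benchmark average scores from the two user studies
small_study = np.array([7.8, 8.1, 7.5])  # original 3-doctor study
large_study = np.array([7.6, 8.3, 7.4])  # 50-participant rebuttal study

# Pearson Correlation Coefficient between the two score series
pcc = float(np.corrcoef(small_study, large_study)[0, 1])
```

A PCC near 1 (the paper reports 0.89) indicates the small-scale study ranked the systems consistently with the larger one.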
***
**Q8**: About "Chinese medicine".
**A8**: We thank the reviewer for the constructive proposal. We promise to fix it in the revision to avoid the meaning confusion.
***
**Q9**: In-context learning (ICL) for ChatGPT.
**A9**: Insightful comment. We designed a semantics-driven strategy to select suitable examples to guide ChatGPT's ICL. During testing, we randomly sample 100 candidates from the training instructions and compute the cosine similarity between their semantic features and those of the test sample. The features are extracted by a BERT model. The top-5 most similar training samples are then used as ICL examples to guide ChatGPT's generation. Table 3 in the **Response.pdf** shows the potential of ICL to improve ChatGPT. However, our model still outperforms ChatGPT on most metrics due to healthcare-specific systematic training.
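The example-selection step can be sketched as a cosine-similarity top-k over precomputed features. The vectors here are 2-D toy stand-ins for BERT features, and the function name is ours:

```python
import numpy as np

def topk_icl_examples(query_vec, train_vecs, k=5):
    """Return indices of the k training instructions whose features
    are most cosine-similar to the test query's features."""
    q = query_vec / np.linalg.norm(query_vec)
    T = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    sims = T @ q                      # cosine similarity to each candidate
    return np.argsort(-sims)[:k].tolist()

# toy check: candidate 2 points in the same direction as the query
query = np.array([1.0, 0.0])
cands = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0], [-1.0, 0.0]])
ranked = topk_icl_examples(query, cands, k=2)  # [2, 1]
```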
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer eZYU:
We would like to thank the reviewer for taking the time to review our paper and for the comments.
Please kindly let us know if anything is unclear. We truly appreciate this opportunity to clarify our work and shall be most grateful for any feedback you could give to us.
Best regards,
Authors
---
Rebuttal Comment 1.2:
Title: Thanks for your rebuttal and I will keep my score.
Comment: Dear Author of Paper 385,
Thank you for your rebuttal and the additional clarifications provided. After reviewing your comments, the rebuttal document, and revisiting the appendix of your original submission, I still have several concerns regarding your paper.
The main issue is not with the model architecture itself. I commend you for thoroughly testing various baseline models on your dataset and proposing a new training pipeline. However, it’s important to note that similar solutions have been proposed for general-domain LLMs. Given that your model and data are positioned as the first Chinese pediatric LLM assistant, the primary contribution of this paper should be the quality of the dataset.
Your paper makes a strong assertion that LLMs can be effectively used to answer pediatric questions and generate humanistic responses. However, the conversational responses in the dataset provided do not seem to fully support this claim, as they often appear to be direct extracts from textbooks rather than original, contextually adapted answers. Furthermore, it appears that your model, including the dataset, heavily relies on prior work such as DISC-MedLLM. While the concept of developing an LLM specifically for pediatrics is compelling, the current implementation may not adequately address real-world challenges.
I suggest consulting with researchers from different pediatric departments before proceeding further with this dataset to ensure that it meets the practical needs of the field.
Additionally, we require more time to discuss the ethical implications of your paper with the ethics reviewers once they submit their evaluations. Therefore, I will be maintaining my current score at this time. However, I will make every effort to reach a consensus with the other reviewers in the ML domain, and it is possible that the final score may be adjusted.
Best regards,
Reviewer eZYU
---
Rebuttal 2:
Comment: Dear Reviewer eZYU:
We would like to thank the reviewer for taking the time to review our paper and for the comments.
Please kindly let us know if anything is unclear. We truly appreciate this opportunity to clarify our work and shall be most grateful for any feedback you could give to us.
Best regards,
Authors
---
Rebuttal 3:
Comment: Dear Reviewer eZYU:
As the discussion period is closing, we sincerely look forward to your feedback. We deeply appreciate your valuable time and efforts. It would be very much appreciated if you could once again help review our responses and let us know if these address or partially address your concerns and if our explanations are heading in the right direction. Please also let us know if there are further questions or comments about this paper. We strive to improve the paper consistently, and it is our pleasure to have your feedback!
Best regards,
Authors
---
Rebuttal 4:
Comment: Dear Reviewer eZYU:
As the discussion period is closing, we sincerely look forward to your feedback. We deeply appreciate your valuable time and efforts. It would be very much appreciated if you could once again help review our responses and let us know if these address or partially address your concerns and if our explanations are heading in the right direction. Please also let us know if there are further questions or comments about this paper. We strive to improve the paper consistently, and it is our pleasure to have your feedback!
Best regards,
Authors
---
Rebuttal 5:
Title: Thanks for the comment and we will make further clarifications about humanistic responses.
Comment: **Q3**: Whether the conversational responses in the dataset support the assertion that the model produces humanistic responses.
**A3**: We would like to clarify that Appendix A.1 and Figures 7\&8 show how valid instruction pairs can be extracted from streamed text from specialized pediatric data. At this stage, we expect the role-play driven API to build instructions with as much guaranteed knowledge completeness and accuracy as possible, rather than contextually adapted answers. Adapted answers will largely suffer from the API's own hallucination problem, leading to potentially noisy and non-factual information. After this stage, we feed the built initial instructions back to the pediatricians to help us with humanistic stylistic rephrasing, referring to making the responses as caring and well-presented as the pediatricians’.
We promise to clarify in the revision that the humanistic style comes from the physicians' subsequent rewrites rather than from the initially extracted instructions.
---
Rebuttal 6:
Title: Thanks for the comment and we will make further clarifications about similar solutions.
Comment: Dear Reviewer eZYU:
We deeply appreciate the reviewer's comments. Based on your response and original comments, our rebuttal appears to have addressed the vast majority of your concerns. We would like to clarify the remaining concerns as follows.
**Q1**: The reviewer recognizes and appreciates the thorough evaluation and novel training pipeline. However, the reviewer affirms that there are similar solutions in the generalized domain.
**A1**:
We are concerned that we may not fully understand your point about similar solutions. It would be helpful if the reviewer could give specific examples for reference, which would help us interpret and provide evidence for this statement. Below, we go through the relevant content to help address the concern.
We start by listing some of the other reviewers' comments to help you reach a consensus with them, as you hope to do in your response. As described by **reviewer oou4**, our innovative training process, which includes continuous pre-training, full-parameter supervised fine-tuning, preference optimization, and mixture of universal-specific experts-based supervised fine-tuning, effectively adapts the model to the pediatric domain. **Reviewer jQHJ** explicitly noted that the multi-stage training process of this work helps enhance the model's ability to produce good responses and adapt to the complexity of pediatric consultations. In addition, **reviewers eEmB** and **MG71** pointed out multiple domain-specific innovations in our systematic training pipeline in the strengths, including the new preference optimization strategy DFPO and the hybrid instruction pre-training mechanism. We have also explicitly clarified these new contributions in the manuscript. Specifically,
* As stated in lines 131-138, we propose a hybrid instruction pre-training mechanism in Continuous Pre-Training (CPT) to bridge the capability weakening due to corpus format discrepancies between the internal and injected pediatric knowledge of foundation models, facilitating knowledge accumulation and extension. Our mechanism better facilitates the adaptation of model capabilities in pediatric healthcare.
* As stated in lines 169-176, we present a stable method for domain-specific LLMs called Direct Following Preference Optimization (DFPO). DFPO utilizes variable changes to formulate the preference loss as a policy function that efficiently optimizes the policy with a simple binary cross-entropy objective. Meanwhile, our method directly regularizes model behavior boundaries in an instruction-following paradigm on medical demonstrations of preferred responses, facilitating robustness and smoothing of the preference learning.
* As stated in lines 186-199, we devise a mixture of universal-specific experts strategy to tackle the competency conflict between medical generalist and pediatric expertise in secondary SFT, which strengthens the model's adaptability to distinct downstream tasks. Specific experts master different pediatric tasks by soft routing gating control with noise. Also, a universal expert is consistently activated to prevent general knowledge forgetting and mitigate competency conflict. Our strategy enables our models to perform more comprehensively and competitively in pediatric medical applications.
We believe that the above contributions can significantly differentiate and strengthen the necessity of our work compared to techniques in the generalized domain.
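As described, DFPO casts preference learning as a binary cross-entropy objective over policy log-ratios, in the spirit of direct preference optimization. A minimal per-pair sketch, with hypothetical argument names and the per-sequence log-probabilities assumed precomputed:

```python
import math

def dfpo_style_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Binary cross-entropy preference loss on the implicit reward margin.

    pi_*  : policy log-probabilities of the preferred / dispreferred response
    ref_* : reference-model log-probabilities of the same responses
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# equal log-ratios -> margin 0 -> loss = log 2
baseline = dfpo_style_loss(-5.0, -5.0, -5.0, -5.0)
# favoring the chosen response over the reference lowers the loss
improved = dfpo_style_loss(-4.0, -6.0, -5.0, -5.0)
```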
---
Rebuttal 7:
Title: Thanks for the comment and we will make further clarifications about the primary contribution.
Comment: **Q2**: Given that your model and data are positioned as the first Chinese pediatric LLM assistant, the primary contribution of this paper should be the quality of the dataset.
**A2**: As stated in lines 33-43, this paper addresses shortcomings at both the dataset and framework levels of LLM construction in the Chinese healthcare and pediatric domains. We propose a high-quality dataset, PedCorpus, through a series of new instruction-building mechanisms, and a systematic training pipeline with new strategies to address the challenges of multi-stage training. Our technical novelty is distributed across these mechanisms and strategies.
For the dataset dimension, existing instruction data typically involve vanilla rephrasing of general medical corpora or aggregation of doctor-like dialogues, which loses the specialization and focus needed in pediatric applications. More importantly, current straightforward instruction-construction paradigms for different dialogue rounds fail to accommodate multi-task healthcare services in real-world scenarios, limiting model generalization.
In comparison, the novelties of our PedCorpus derive from three characteristics.
* Task Diversity: besides containing generalist healthcare data, three application-oriented tasks are considered, including medical question-answer, evidence-based diagnosis, and treatment recommendation.
* Source Richness: distinct pediatric textbooks, guidelines, and knowledge graph resources provide solid assurance of medical knowledge's accuracy.
* Instruction Extensibility: vanilla instructions can be readily extended to seed instructions for generating specialized corpora to serve different training phases.
For the technique dimension, prior methods relied on SFT to compensate for medical instruction following capabilities, ignoring the discrepancies between inherent and externally absorbed knowledge within the models. This single pattern causes secondary LLMs to lapse into excessive role-playing rather than understanding. Despite a few attempts in the reinforcement learning from human feedback phases, their performance is restricted by actor-critic instability and online sampling bias.
In addition to the new techniques in the training process mentioned in **A1**, we propose three effective methods in ensuring the quality of the dataset. Specifically,
* As stated in lines 106-107, we present a role-playing-driven instruction building mechanism to convert the consolidated textual data into well-designed pediatric instructions. The proposed approach endows the advanced language model with the professional role of an expert pediatrician to generate accurate instruction knowledge for the target model training.
* As stated in lines 113-155, we introduce an in-context learning paradigm to regularize and refine vanilla conversations through advanced language models with the self-instruct pattern.
* As stated in lines 123-124, we design a progressive instruction reconstruction mechanism to distill the sampled instructions to ensure informative pediatric model responses. Unlike the traditional self-instruct pattern of using APIs, our mechanism guides APIs to take the perspective of the experienced pediatrician to complete progressive refinement tasks in the given instruction and answer scenarios.
Based on the above analysis and explanations, our main contributions are distributed among dataset construction and framework structure, and the different mechanisms and strategies in these two aspects help our work perform better on task-specific pediatric domains and on general purpose medical applications.
---
Rebuttal 8:
Title: Thanks for the comment and we will make further clarifications about the model and dataset.
Comment: **Q4**: The model and dataset rely on previous work such as DISC-MedLLM [1].
**A4**: We clarify the differences between our approach and previous work in terms of both the model and the dataset.
**Model Level**: Current Chinese medical LLMs are constructed based on open-source foundation models for secondary development. In this context, we design a systematic training pipeline that incorporates the new direct following preference optimization (DFPO) and the mixture of universal-specific experts (MUE) structures, both of which are absent from previous Chinese healthcare models. Our DFPO facilitates the control of model behavioral boundaries, mitigating potentially harmful and unfaithful outputs. Our MUE structure effectively resolves conflicts between different pediatric tasks and general medical knowledge, facilitating better knowledge learning and instruction following. In comparison, **DISC-MedLLM [1] has no extra innovations in model structure and simply uses Baichuan-13B-Base to execute SFT procedures to develop medical competencies**.
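The MUE routing described above (a universal expert that is always active, plus specific experts weighted by noisy soft gating) can be sketched as follows; the expert functions, shapes, and names are illustrative assumptions rather than the actual architecture:

```python
import numpy as np

def mue_forward(x, experts, universal, W_gate, noise_scale=0.1, rng=None):
    """Combine an always-active universal expert with specific experts
    weighted by soft routing whose gate logits carry Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    logits = x @ W_gate + noise_scale * rng.normal(size=len(experts))
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()  # softmax over the specific experts
    specific = sum(g * f(x) for g, f in zip(gates, experts))
    return universal(x) + specific, gates

# toy setup: 2 specific experts + 1 universal expert on 3-dim features
d = 3
experts = [lambda x: 2 * x, lambda x: -x]   # stand-ins for expert FFNs
universal = lambda x: x + 1.0               # always-activated expert
W_gate = np.random.default_rng(2).normal(size=(d, 2))
y, gates = mue_forward(np.ones(d), experts, universal, W_gate)
```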
**Dataset level**: We elaborate on our PedCorpus and DISC-Med-SFT from DISC-MedLLM to emphasize that the two are completely different.
As described in Section 3.1\&Table 1 of the manuscript, and the **global response**, the proposed PedCorpus consists of three parts, including pediatric data, real doctor-patient conversations, and distilled medical datasets.
* The pediatric data comes from instruction pairs that we extracted from scratch from specialized textbooks, guidelines, and knowledge graphs, which are key to building models for task-specific pediatric applications. Among these, the knowledge graphs were constructed from scratch by ourselves for the pediatric domain. In contrast, DISC-Med-SFT uses a publicly available knowledge graph, CMeKG, which focuses on general medical knowledge and differs from our data sources and modeling philosophy.
* Our doctor-patient conversation data come from two components: voice transcriptions collected at our partnering healthcare institution and the publicly accessible dataset cMedQA2. For the former, we have clarified the relevant ethical issues in the global response and given privacy protection and implementation schemes that match the NeurIPS guidelines. For the latter, we have regularized the vanilla doctor responses using an in-context learning strategy, which leads to more organized and fluent responses compared to the vanilla instructions. In contrast, DISC-Med-SFT uses the publicly available MedDialog and cMedQA2 and adapts the entire dialogue through prompt engineering. We would like to emphasize that incorporating cMedQA2 in the construction of medical LLMs is common practice rather than a particular work-specific design. However, the different paradigms for processing vanilla instructions in the two works result in completely different final SFT data. These observations and measures imply that the doctor-patient conversation data in PedCorpus and DISC-Med-SFT are significantly different.
* For other datasets, we found numerous unclear and incomplete representations in the instruction instances from public benchmarks due to the absence of careful calibration, potentially triggering hallucinated outputs. To this end, we sampled a small set of knowledge-intensive instructions following the principle of quality over quantity. We introduced a progressive instruction reconstruction rule to distill the input and output parts of instruction pairs separately, resulting in completely different data compared to vanilla instructions. In contrast, DISC-Med-SFT primarily uses publicly available data from MedMCQA, moss-sft-003, and Alpaca-GPT4.
Via the citation we provide, ACs and reviewers can readily access DISC-MedLLM to verify our statements and clarifications.
[1] Bao, Z., Chen, W., Xiao, S., Ren, K., Wu, J., Zhong, C., ... & Wei, Z. (2023). Disc-medllm: Bridging general large language models and real-world medical consultation. arXiv preprint arXiv:2308.14346.
---
Rebuttal Comment 8.1:
Title: About DISC-MedLLM
Comment: Dear Author of Paper 385,
I believe there may be some misunderstanding regarding the reason I referenced DISC-MedLLM in my earlier comments. My intention was to highlight that while there are several datasets for medical LLMs with similar formats to yours, the key contribution, as you have stated, should be the quality of the dataset, particularly in the specialized field of Pediatrics.
Upon reviewing the data examples provided in both your paper’s appendix and the rebuttal document, I’m concerned that the dataset may not align with what pediatric practitioners would find most valuable for their domain. This is my area of expertise. DISC-MedLLM encounters similar issues, which is why there’s resistance from healthcare professionals toward over-generalized declarations made by these models.
For instance, the data point in your appendix:
"My child fell while playing a few days ago and complained of severe pain in the lower back. After visiting the hospital, the diagnosis was confirmed as lumbar disc herniation."
The model's response:
"Lumbar disc herniation is rare in children, but treatment still requires caution. For pediatric lumbar disc herniation, pharmacological treatment mainly includes: 1. Nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen or acetaminophen, can relieve pain and reduce inflammation; ... "
Similarly, in the rebuttal pdf:
"I'm glad to help! Pediatric cerebral palsy is a condition caused by a non-progressive brain injury that affects a child's movement and posture. The goal of treatment is to maximize the child's physical functioning and quality of life. Treatment is usually a multidisciplinary approach that includes the following: 1. Physical therapy: Specific exercises and activities are used to build muscle strength and flexibility and to help the child learn correct posture. 2. Medication: Childhood doses of medications are used to help control symptoms, including anti-spasticity medications such as baclofen, dantrolene, and botulinum toxin injections...... 3. Surgery: Selective posterior spinal rhizotomy or muscle tendon lengthening surgery is required for some children with severe muscle spasticity...... Each child with cerebral palsy is different, and the treatment plan should be individualized and tailored to the child's specific needs and abilities."
These responses resemble textbook excerpts or clinical notes, which may not be practical for real-world pediatric practice. This could lead to a significant disconnect between the machine learning and pediatrics communities, something we should aim to avoid.
To address this gap, I strongly recommend consulting with pediatric specialists (your clinical collaborators) to review your dataset. Their insights would be invaluable in ensuring that the data reflects the requirements of real-world clinical practice, rather than just verifying the accuracy of the model's reply.
Best,
Reviewer eZYU
---
Rebuttal 9:
Title: Thanks for the comment and we will make further clarifications about the practical needs.
Comment: **Q5**: Different pediatric departments were consulted to meet the practical needs of the researcher.
**A5**: We agree with the reviewer's suggestion. Considering the multifaceted tasks and diagnostic needs in real medical applications, we believe that no currently available method ensures complete generalization across all applications. However, we can build strong model capabilities for several specific tasks that meet practical needs.
With the suggestions and needs of collaborating healthcare institutions, our work focuses on three applications in pediatric healthcare, including knowledge question-answer, evidence-based diagnosis, and treatment recommendation. Correspondingly, we constructed novel MUE-based SFT processes through task-specific data for learning and mastering relevant knowledge. Also, we incorporate 131 pediatric diseases from 11 broad categories from textbooks to provide the pediatric knowledge needed for these tasks.
As shown in Table 2 and Figures 2\&3, the experiments on the three task-specific testing benchmarks clearly demonstrate the model's value in solving these pediatric tasks. More importantly, the doctor evaluations in Figure 3 also reflect users' favor and approval of the model responses. Further, we include more participants in Figure 1 of the **global response** to demonstrate the model's effectiveness in additional user studies. In addition, we test the model's generalizability on the publicly available healthcare benchmarks webMedQA and CMD, which span multiple departments, and provide corresponding insights and analyses.
As an exploratory research study, we believe that the current contributions and work are noteworthy and valuable. We are committed to collaborating with more pediatric departments and meeting further needs and tasks in our future plan.
---
Rebuttal 10:
Comment: Dear Reviewer eZYU:
We appreciate the time and effort you have put into reviewing our work. We value the discussion period as an opportunity to address your concerns and make the necessary clarifications. We are ready to discuss at any time.
Best regards,
Authors
---
Rebuttal Comment 10.1:
Title: And about the ethics reviews
Comment: I think it is also important to discuss related information about the IRB and the dataset release timeline with the ethics reviewers, because the dataset is the biggest contribution of your paper. Can you provide the related documents? We also need the ACs to encourage the ethics reviewers to participate in the discussion.
Best,
Reviewer eZYU | Summary: This paper introduces a Chinese pediatric LLM assistant, PediatricsGPT. It follows a standard pretraining and SFT pipeline to incorporate the general medical knowledge schema into the models. Specifically, they optimize the response to enhance the generation of pediatrician-like humanistic responses.
Strengths: 1. The establishment of a large-scale, high-quality multi-task medical instruction dataset, PedCorpus.
2. The development of a Chinese pediatric LLM assistant with both pediatric expertise and medical generalist capability, PediatricsGPT.
3. Open source resources for the community.
4. Extensive experiments on both domain-specific capability and general medical capability.
5. Human (expert) study for a comprehensive evaluation
Weaknesses: 1. The authors claim that they crafted 100 high-quality examples to guide the advanced language model, using in-context learning to regularize vanilla conversations in the self-instruct pattern, ensuring doctor-like and patient-friendly model responses. I would like to see a rigorous evaluation of this.
2. According to Table 3, MUE, the Universal Expert, and RLHF seem to result in only limited performance gains.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. When designing such an LLM for pediatrics, is there any specific design/technique regarding such a target group, compared with designing a general medical LLM? I understand the preference optimization for the generation of pediatrician-like humanistic responses is one of such designs; however, is there any other specific design in the training/SFT/knowledge integration phase?
2. In Figures 2&3, how consistent is it for each question in GPT and doctor evaluation?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our large-scale dataset, open-source resources, and extensive experiments. We present our detailed responses below.
**Q1**: About the evaluation of model responses.
**A1**: Insightful comments! To rigorously evaluate whether the in-context learning (ICL) strategy can ensure that the model produces doctor-like and patient-friendly responses, we did our best to invite 100 patients from different departments at our partnering medical institutions to perform the manual evaluation. Each participant was compensated $300, which was greater than the lowest hourly rate in their region. Manual evaluation is reasonable since the response style measure is not easily quantified by metrics but can be readily evaluated by the users. Constrained by rebuttal time, we chose the 7B size to train a candidate version, called CandidateGPT-7B, without using the ICL strategy to regularize vanilla conversations. Other training procedures and settings were kept consistent for a fair comparison. In this case, we asked participants to consider two dimensions, doctor-like professionalism and user-friendliness, to determine the winning response between PediatricsGPT-7B and CandidateGPT-7B on the same instructions across five benchmarks.
Table 2 in the **Response.pdf** shows the average win/tie/loss rates of PediatricsGPT-7B on different benchmarks across distinct evaluators. We found that PediatricsGPT-7B generated more patient-user-friendly response content with higher win rates on pediatric and general-purpose medical tasks. In addition, the lower tie rates on webMedQA and CMD suggest that the doctor-like and patient-friendly response style of our model can be effectively generalized to single/multi-round consultation scenarios.
***
**Q2**: Performance gains on MUE, universal expert, and RLHF in Table 3.
**A2**: We clarify the gains of different components separately and provide analyses.
* As stated in lines 193-199, in the MUE strategy, specific experts are adaptively activated to resolve knowledge competition across different pediatric tasks. The consistently activated universal expert aims to acquire general medical knowledge to prevent competency conflict between medical generalists and pediatric expertise mastery. When removing MUE in Table 3, we observe drops of 1.5% and 2% in the average results of GPT-4 and Doctor evaluations on the pediatric benchmarks EviDiag and TreRecom, proving that MUE delivers significant gains. The limited gains on MedKQ&A imply that vanilla single LoRA only masters knowledge question-answer semantics, further reflecting the effectiveness of MUE.
* By observing the setting of removing the universal expert in Table 3, we find that the universal expert brings significant performance gains of 2% and 2.5% on average on the general medical benchmarks CMD and webMedQA under the GPT-4 and Doctor evaluations, respectively. In contrast, the expert's limited gains on the three specific pediatric tasks are justified since it is not its duty to acquire task-specific knowledge.
* As stated in lines 169-176, we propose the Direct Following Preference Optimization (DFPO) to solve the unstable reward modeling and high computational costs in RLHF. In this case, the experiments on RLHF in Table 3 belong to the ablation studies that verify the proposed preference optimization approach. The limited gains of RLHF precisely show the merits and necessity of our DFPO in human preference alignment.
***
**Q3**: Other designs/techniques for pediatric LLMs.
**A3**: In addition to the DFPO technique for generating pediatrician-like responses, our study has four other novel designs aimed at the target group.
* As stated in lines 131-138, we propose a hybrid instruction pre-training mechanism in Continuous Pre-Training (CPT) to bridge the capability weakening due to corpus format discrepancies between the internal and injected pediatric knowledge of foundation models, facilitating knowledge accumulation and extension. Our mechanism better facilitates the adaptation of model capabilities in pediatric healthcare.
* As stated in lines 186-199, we devise a mixture of universal-specific experts strategy to tackle the competency conflict between medical generalist and pediatric expertise in secondary SFT, which strengthens the model's adaptability to distinct downstream tasks. Specific experts master different pediatric tasks by soft routing gating control with noise. Also, a universal expert is consistently activated to prevent general knowledge forgetting and mitigate competency conflict. Our strategy enables our models to perform more comprehensively and competitively in pediatric medical applications.
* As stated in lines 106-107, we present a role-playing-driven instruction building rule to convert the consolidated textual data into well-designed pediatric instructions. The proposed approach endows the advanced language model with the professional role of an expert pediatrician to generate accurate instruction knowledge for the target model training.
* As stated in lines 123-124, we design a progressive instruction reconstruction rule to distill the sampled instructions to ensure informative pediatric model responses. Unlike the traditional self-instruct pattern of using APIs, our rule guides APIs to take the perspective of the experienced pediatrician to complete progressive refinement tasks in the given instruction and answer scenarios.
***
**Q4**: Consistency of GPT-4 and Doctor Evaluations.
**A4**: We measure the consistency of the results of two evaluations by Bland-Altman Analysis (BAA) and Pearson Correlation Coefficients (PCC). The mean difference value in BAA is 0.21, and most measurement pairs are within the limits of agreement [-0.78, 1.18], suggesting high agreement between the two evaluations. In addition, the PCC score is 0.93, implying high evaluation consistency. We promise to add the above analyses to the revision.
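For readers unfamiliar with the two consistency measures mentioned above, the following sketch shows how a Bland-Altman analysis and a Pearson correlation coefficient can be computed for paired rater scores. The data here are synthetic stand-ins, not the paper's actual GPT-4 and doctor evaluation scores:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired scores for the same responses: GPT-4 vs. doctor ratings.
gpt4 = rng.uniform(5, 10, size=50)
doctor = gpt4 + rng.normal(0.2, 0.5, size=50)  # similar scores with small offset and noise

# Bland-Altman analysis: mean difference and 95% limits of agreement.
diff = gpt4 - doctor
mean_diff = diff.mean()
sd = diff.std(ddof=1)
loa = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)

# Pearson correlation coefficient between the two raters.
pcc = np.corrcoef(gpt4, doctor)[0, 1]

print(f"mean difference: {mean_diff:.2f}")
print(f"limits of agreement: [{loa[0]:.2f}, {loa[1]:.2f}]")
print(f"PCC: {pcc:.2f}")
```

High agreement corresponds to a mean difference near zero with most pairwise differences inside the limits of agreement, and a PCC close to 1, which is the pattern the rebuttal reports.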
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thank you for your response. Most of my concerns have been addressed and I will keep my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer DSSd
Comment: Many thanks to the reviewer for the overall constructive and insightful comments! | Rebuttal 1:
Rebuttal: We thank all reviewers for their time and effort. Here we clarify common questions about ethical issues and open source details of the datasets.
**Q1**: Ethical issues with datasets.
**A1**: As stated in lines 101-124, the proposed PedCorpus consists of three parts, including pediatric data, real doctor-patient conversations, and distilled medical datasets.
* Among these, pediatric data contains only pure medical knowledge from textbooks, guidelines, and knowledge graphs without any private and sensitive information. These resources follow the CC BY 4.0 protocol, which allows users to freely adapt and convert the original data. The creators of these resources are reasonably attributed as co-authors of this study.
* The data source for conversations comes from two components: voice transcriptions collected at our partnering healthcare institution and the publicly accessible dataset cMedQA2 [1]. For the former, we have obtained the approval of the institutional ethics committee, which is a **prerequisite for our access to conversation data**. All data are rigorously anonymized and desensitized to avoid ethical issues. Note that we underwent the internal ethical review by the corresponding Chinese healthcare institution, which is fully compliant with the ethical guidelines of NeurIPS 2024. The corresponding guidelines are described below: **"In cases when no formal process exists, they can undergo an equivalent informal process (e.g., via their peers or an internal ethics review)."** Considering the double-blind review policy, we do not have the right to submit the original approval during the review period. If NeurIPS organizers provide special access, we can submit the approval immediately for review. For the cMedQA2, the publisher collected real conversations from online treatment platforms and anonymized personal information. This dataset is licensed under the GPL-3.0 license, which allows users to modify, distribute, and make private use of it.
* In the distilled medical datasets, the licenses for Huatuo-26M, MedDialog, and CMeKG are Apache-2.0, MIT, and MIT, respectively. These licenses allow users to manipulate the data without restriction, including, but not limited to, the right to use, copy, modify, merge, and distribute. These datasets have been processed to protect against privacy leakage. In this case, our instruction reconstruction rule is only used to enhance the density of medical knowledge in the vanilla data.
To summarize, all involved data undergoes rigorous review and processing to comply with ethical guidelines. Our commitment to data privacy and security is unwavering. We promise to refine the usage agreements for the data sources in the revision to provide comprehensive details.
[1] Zhang, Sheng, et al. "Multi-scale attentive interaction networks for chinese medical question answer selection." IEEE Access 6 (2018): 74061-74071.
***
**Q2**: About open source details.
**A2**: As stated in line 21, our database and models will be open source following the MIT license to promote the development of the community. In this case, we promise to release datasets containing training data for model capacity construction and evaluation benchmarks from different tasks and medical departments to avoid researcher concerns about subsequent studies and verifications.
***
**Q3**: Rejection of data leakage in LLMs.
**A3**: This study followed three guidelines to avoid data leakage concerns inherent in LLMs.
* We carefully verified the pre-training details and data usage of the foundation models to exclude any potential leakage of assessment data in the initial model training.
* As indicated in line 206, the data in the designed pediatric medical evaluation benchmarks are derived from held-out samples to reject any overlap with the training data in the systematic pipeline.
* For the general medical proficiency evaluations, we explicitly excluded the used webMedQA and CMD benchmarks from the training data to ensure the fairness and effectiveness of the assessments.
Pdf: /pdf/3a9de66104b56afdfbbcc91e0f8473f45bcfe115.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents PediatricsGPT, a specialized large language model (LLM) designed to assist in pediatric medical consultations in China, where there is a significant shortage of healthcare resources. PediatricsGPT is built upon PedCorpus, a high-quality dataset containing over 300,000 instructions from pediatric textbooks, guidelines, and knowledge graphs. The model undergoes a systematic training process that includes continuous pre-training, full-parameter supervised fine-tuning, human preference alignment, and parameter-efficient secondary supervised fine-tuning. This comprehensive approach aims to address the inadequacies of existing Chinese medical LLMs in pediatric applications.
Strengths: 1) The development of PedCorpus, a robust dataset with over 300,000 multi-task instructions from credible pediatric sources, ensures that PediatricsGPT has access to diverse and accurate medical knowledge.
2) The multi-phase training process, which includes continuous pre-training, full-parameter fine-tuning, and human preference optimization, enhances the model's capability to generate precise responses, and adapt to the complexities of pediatric medical consultations.
3) Experiments show that PediatricsGPT outperforms other Chinese medical LLMs, validating its effectiveness and reliability in real-world applications.
4) Manual human study is conducted.
Weaknesses: 1) Overall I believe the developed model with the curated datasets will be a useful resource. But I think the authors can probably better highlight the novelty in this paper and how the techniques employed in this paper differs from other works.
2) As one of the main contributions of this paper lies in the creation of the dataset, I am not sure if this paper would be more suitable for the D&B track at NeurIPS. However, this does not affect my overall rating.
3) The improvements attributed to the universal expert appear marginal, raising questions about whether the gains justify the additional time and memory costs.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our high-quality dataset, multi-phase training process, and comprehensive experiments. We present our detailed responses below.
**Q1**: Highlight the novelty of this paper and how it differs technically from other works.
**A1**: Thanks for the constructive comments. The novelty of this study encompasses both dataset and methodology technique dimensions.
For the dataset dimension, existing instruction data typically involve vanilla rephrasing of general medical corpora or aggregation of doctor-like dialogues, which loses the specialization and focus needed for pediatric applications. More importantly, current straightforward instruction construction paradigms across different dialogue rounds fail to accommodate multi-task healthcare services in real-world scenarios, limiting model generalization.
In comparison, the novelties of our PedCorpus derive from three characteristics.
* Task Diversity: besides containing generalist healthcare data, three application-oriented tasks are considered, including medical question-answer, evidence-based diagnosis, and treatment recommendation.
* Source Richness: distinct pediatric textbooks, guidelines, and knowledge graph resources provide solid assurance of medical knowledge's accuracy.
* Instruction Extensibility: vanilla instructions can be readily extended to seed instructions for generating specialized corpora to serve different training phases.
For the technique dimension, prior methods relied on Supervised Fine-Tuning (SFT) to compensate for medical instruction following capabilities, ignoring the discrepancies between inherent and externally absorbed knowledge within the models. This single pattern causes secondary LLMs to lapse into excessive role-playing rather than understanding. Despite a few attempts in the reinforcement learning from human feedback phases, their performance is restricted by actor-critic instability and online sampling bias.
In contrast, our technical novelty comes from several points.
* As stated in lines 54-56, we propose a hybrid instruction pre-training mechanism in Continuous Pre-Training (CPT) to bridge the capability weakening due to corpus format discrepancies between the internal and injected medical knowledge of foundation models, facilitating knowledge accumulation and extension.
* As stated in lines 169-176, we present a stable method for domain-specific LLMs called Direct Following Preference Optimization (DFPO). DFPO utilizes variable changes to formulate the preference loss as a policy function that efficiently optimizes the policy with a simple binary cross-entropy objective. Meanwhile, our method directly regularizes model behavior boundaries in an instruction-following paradigm on medical demonstrations of preferred responses, facilitating robustness and smoothing of the preference learning.
* As stated in lines 186-199, we devise a mixture of universal-specific experts strategy to tackle the competency conflict between medical generalist and pediatric expertise in secondary SFT, which strengthens the model's adaptability to distinct downstream tasks. Specific experts master different pediatric tasks by soft routing gating control with noise. Also, a universal expert is consistently activated to prevent general knowledge forgetting and mitigate competency conflict.
* As stated in lines 106-107, we propose a role-playing-driven instruction building rule to convert the consolidated textual data into instructions. The proposed rule is effective in generating accurate and reliable instruction knowledge for model training.
* As stated in lines 123-124, we design a progressive instruction reconstruction rule to distill the sampled instructions to ensure informative and logical model responses. Unlike the traditional self-instruct pattern of using APIs, our method guides APIs to take the perspective of the experienced doctor to complete progressive tasks in the given instruction and answer scenarios.
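The noisy soft-routing gating over specific experts with an always-active universal expert, described in the mixture-of-experts bullet above, could be sketched roughly as follows. This is an illustrative reading of the rebuttal's description, not the authors' implementation; all names, shapes, and the stand-in expert outputs are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_soft_routing(h, W_gate, W_noise, rng):
    """Compute soft routing weights over specific experts with learned noise.

    h: (d,) hidden state; W_gate, W_noise: (d, K) gating parameters.
    Gaussian noise scaled by a softplus of the noise logits perturbs the
    clean gate scores, which can encourage balanced expert usage in training.
    """
    clean = h @ W_gate                            # (K,) clean gate logits
    noise_scale = np.log1p(np.exp(h @ W_noise))   # softplus keeps scale positive
    logits = clean + rng.standard_normal(clean.shape) * noise_scale
    exp = np.exp(logits - logits.max())           # stable softmax
    return exp / exp.sum()

d, K = 16, 3                                      # hidden dim, number of specific experts
h = rng.standard_normal(d)
W_gate = rng.standard_normal((d, K))
W_noise = rng.standard_normal((d, K)) * 0.01

# Stand-ins for the outputs of K task-specific LoRA experts and one universal expert.
specific_outputs = [rng.standard_normal(d) for _ in range(K)]
universal_output = rng.standard_normal(d)

w = noisy_soft_routing(h, W_gate, W_noise, rng)
# The universal expert is always added; specific experts are mixed by the gate.
combined = universal_output + sum(wi * oi for wi, oi in zip(w, specific_outputs))
```

The key design point reflected here is that the universal expert bypasses the gate entirely, so general medical knowledge contributes to every forward pass regardless of which pediatric task the router favors.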
***
**Q2**: About the suitable NeurIPS track for this paper.
**A2**: We thank the reviewer for the overall recognition. As we mentioned in **A1**, this paper presents a high-quality PedCorpus and a systematic training pipeline to address the weaknesses of previous works in both the dataset and modeling methodology. Besides the dataset, our methodological contributions span different stages in the training pipeline to address the corresponding challenges. Specifically, we propose a hybrid instruction pre-training mechanism in the CPT stage to bridge the capability weakening due to corpus format discrepancies between the internal and injected medical knowledge of foundation models. In the intention alignment stage, a DFPO is devised to enhance response robustness and align human preferences. Moreover, we propose a mixture of universal-specific experts strategy to address competition across different pediatric tasks and the conflicts between medical generalization and specialized knowledge in the LoRA-based SFT stage.
We believe that these contributions make this study suitable for the Main Track.
***
**Q3**: Improvements on universal expert.
**A3**: As stated in line 198, the universal expert aims to prevent general knowledge forgetting. In this case, Table 3 clearly shows that the universal expert on the general healthcare benchmark CMD and webMedQA brings significant improvements of 2% and 2.5% on average under the GPT-4 and Doctor evaluation, respectively. Instead, the expert's marginal gains on the three specific pediatric tasks are justified since it is not its duty to acquire task-specific knowledge. Furthermore, the visualization results in Figure 6(b) provide evidence that the universal expert handles general healthcare.
It is worth noting that LoRA-based universal expert requires only 0.24% training parameters, implying almost negligible time and memory cost compared to the whole SFT procedure.
We promise to optimize the above clarifications in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I will keep my score.
---
Rebuttal 2:
Comment: Dear Reviewer jQHJ:
We would like to thank the reviewer for taking the time to review our paper and for the comments.
Please kindly let us know if anything is unclear. We truly appreciate this opportunity to clarify our work and shall be most grateful for any feedback you could give to us.
Best regards,
Authors | Summary: This paper proposes the first Chinese pediatric Large Language Model (LLM) assistant, designed through a robust training pipeline including continuous pre-training, full-parameter supervised fine-tuning, and direct following preference optimization. The model is shown to outperform existing Chinese medical LLMs in pediatric tasks, and the authors intend to open-source both the model and dataset for community development.
Strengths: - First Chinese Pediatric LLM Assistant with Pediatric Expertise: PediatricsGPT represents a significant milestone as the first Chinese LLM specifically designed to assist with pediatric medical applications, demonstrating expertise in this specialized domain.
- Novel Domain Adaptation Paradigm: The paper introduces an innovative training pipeline that includes continuous pre-training, full-parameter supervised fine-tuning, and preference optimization, effectively adapting the model to the pediatric domain.
Weaknesses: - Ethical Issues of the Data: The paper lacks explicit details about the usage agreements for the data sources, including license information and open-source details. Given the data leakage issues inherent in LLMs, clarifying these aspects is crucial for the ethical release and use of both the model and dataset.
- Evaluation Datasets: The three main evaluated datasets appear to be private, raising concerns about their accessibility for further research and verification. The paper should clarify whether these datasets can be openly released and whether there are any ethical issues with releasing these datasets.
- Privacy Concerns with GPT-4 API: The use of the GPT-4 API to build instruction data introduces potential privacy concerns, particularly regarding whether data from guidelines and knowledge bases can be legally and ethically fed into a commercialized API like GPT-4.
- Missing Representative Baselines: The paper does not include comparisons with leading medical LLMs such as Meditron, Me-LLaMA, and other advanced models like GPT-4, LLaMA3. Including these baselines would provide a more comprehensive evaluation of PediatricsGPT’s performance.
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects', 'Ethics review needed: Data privacy, copyright, and consent']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contributions: the significant milestone, the novel domain adaptation paradigm, and the innovative training pipeline. Below are some specific responses. The common questions regarding the ethical issues are addressed in the **global response**.
**Q1**: About the evaluation datasets.
**A1**: Constructive proposal! As stated in line 21, we are committed to open-sourcing datasets and models to advance the community. In this case, three main evaluated datasets will be made public at the same time to support further research and verification. All components of the database to be released are anonymized and desensitized to eliminate any potential ethical issues. Relevant discussions can be found in the **global response**. We promise to improve the open-source details in the revision.
***
**Q2**: Privacy concerns with GPT-4 API.
**A2**: We want to clarify several aspects to emphasize that the usage of the GPT-4 API in this study does not pose potential privacy concerns.
* The specialized pediatric data used contains only purely medical knowledge from textbooks, guidelines, and knowledge graphs without any personal or sensitive information. Based on this, we further filtered ambiguous and misleading content through meticulous manual screening to avoid any privacy concerns. As stated in lines 106-108 and Appendix A.1, the proposed role-playing-driven instruction construction rule assembles structured instructions only from the consolidated textual data. This means that our rule eliminates privacy concerns in the input stage of the API. In addition, we restrict the generation behavior of the API to a given secure corpus through carefully crafted prompts, which excludes the generation of potentially unethical content from the internal knowledge of the API.
* The collected conversational data has been approved by the institutional ethics committee, which is a prerequisite for us to access the conversational data. All data are strictly anonymized and de-identified to avoid ethical issues. Each participant signed a GDPR informed consent form, which allows the dataset to be publicly available for research purposes. In this case, our in-context learning strategy only regularizes concise vanilla conversations via the API without introducing privacy concerns.
* For the distilled medical datasets, the original gatherers have anonymized the relevant corpus to ensure no personal information is contained. As stated in lines 123-124 and Appendix A.3, our progressive instruction reconstruction rule only enhances the informativeness and density of medical knowledge in the instructions through the API without any privacy leakage.
* The above data resources comply with OpenAI's official terms and security criteria to allow researchers to use the corresponding API in a legal and ethical manner.
***
**Q3**: Regarding other representative baselines.
**A3**: Many thanks to the reviewer for the constructive proposal. We clarify the baseline selection and comparison through the following points.
* For a fair and intuitive comparison, the used baselines focus primarily on models that specialize in building Chinese language healthcare capabilities. This means the models' training corpus and response targets are predominantly Chinese rather than English throughout the whole training process.
* We follow the proposal to incorporate the English-oriented baselines to provide comprehensive insights. Among the listed LLMs, we implement models including Meditron-7B-1.0, LLaMA3-8B, and GPT-4. We are considering incorporating Me-LLaMA in the revision as we did not have permission to access its weights before the rebuttal deadline. It is worth noting that the submission deadline (2024/5/22) for the manuscript was earlier than the release (2024/6/5) of Me-LLaMA's weights.
* Table 1 in the **Response.pdf** shows the comparison results across different tasks on the MedKQ&A, EviDiag, and TreRecom benchmarks. PediatricsGPT-7B outperforms similarly sized models by large margins, proving the medical expertise of our model. Although other baselines are more competitive on the Distinct metric, which measures response diversity, the precision and recall of responses are more valuable in the medical domain. Furthermore, our 13B version achieves competitive or even better results compared to GPT-4 on the vast majority of metrics, showing the effectiveness of the proposed systematic training pipeline in building domain-specific models.
---
Rebuttal 2:
Comment: Dear Reviewer oou4:
We would like to thank the reviewer for taking the time to review our paper and for the comments.
Please kindly let us know if anything is unclear. We truly appreciate this opportunity to clarify our work and shall be most grateful for any feedback you could give to us.
Best regards,
Authors
---
Rebuttal 3:
Comment: Dear Reviewer oou4:
As the discussion period is closing, we sincerely look forward to your feedback. We deeply appreciate your valuable time and efforts. It would be very much appreciated if you could once again help review our responses and let us know if these address or partially address your concerns and if our explanations are heading in the right direction. Please also let us know if there are further questions or comments about this paper. We strive to improve the paper consistently, and it is our pleasure to have your feedback!
Best regards,
Authors | null | null | null | null |
Fairness-Aware Estimation of Graphical Models | Accept (poster) | Summary: This paper proposes a novel method for estimating graphical models, particularly Gaussian, Covariance, and Ising models, from data while taking fairness into consideration. Fairness is defined on the graph disparity error, i.e., the difference between the loss of the model for a particular group and the minimum loss for that group. The goal is to make the graph disparity errors among different groups equal or close. The paper proposes non-smooth multi-objective optimization to solve the problem, shows that the method achieves weak Pareto optimality, and analyzes the convergence rates. Experiments show the efficacy of the method.
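The graph disparity error described in the summary can be formalized as follows. This is a generic reading in our own notation, with $\Theta$ the shared graph estimate and $L_k$ the estimation loss for group $k$; the paper's exact notation may differ:

```latex
\Delta_k(\Theta) \;=\; L_k(\Theta) \;-\; \min_{\Theta'} L_k(\Theta'), \qquad k = 1, \dots, K,
```

with fairness pursued by keeping the pairwise gaps $|\Delta_j(\Theta) - \Delta_k(\Theta)|$ small while each $L_k$ remains low, which naturally yields the multi-objective optimization problem the method solves.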
Strengths: Originality
Despite extensive research in fair machine learning, this paper tackles a relatively under-studied problem, which is to conduct fair estimation of graphical models. The authors also propose a novel multi-objective proximal gradient method for GM estimation.
Quality
The proposed method is theoretically solid. The paper proves the weak Pareto optimality of the method and also analyzes the global convergence rates for Gaussian, Covariance, and Ising models. I didn’t check the correctness of the proofs.
The experiments were conducted using both synthetic and real-world datasets. The results clearly demonstrate improvement over the baseline method in terms of fairness.
Clarity
The paper is relatively well-written and organized.
Significance
The experiments demonstrate that the proposed method can provide insights into sensitive group-related features, in addition to achieving fairness in GM estimation.
Weaknesses: - The optimality analysis requires the convexity of the loss function, and it remains unclear how the performance would be affected if the loss function were non-convex, which is common in machine learning. For example, if we use an encoding network to convert the data X into a representation space and compute the loss in the representation space, then the loss function would be non-convex.
- There is just one baseline used in the experiments, which was published in 2012. It could strengthen the experiment section if modern GM estimation methods are used as baselines.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Can the proposed method be applied to the estimation of directed graphs, like causal graphs?
- In Algorithm 1, the first step is to initialize local graph estimates for all groups. When the number of groups is large, the sample sizes of certain groups could become small, which may affect the accuracy of the local graph estimates. How will the accuracy of the local graph estimates affect the global graph estimate?
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness 1:** The optimality analysis requires the convexity of the loss function, and it remains unclear how the performance would be affected if the loss function were non-convex, which is common in machine learning. For example, if we use an encoding network to convert the data X into a representation space and compute the loss in the representation space, then the loss function would be non-convex.
**Response:** We appreciate the reviewer’s insightful comment. We completely agree that one limitation of our nonsmooth multi-objective optimization work is its reliance on the convexity of the loss function. However, many existing theoretical results and estimation algorithms for graphical models using single-objective optimization also rely on the convexity of their objectives. For example, please refer to QUIC[[HSD2014](https://jmlr.org/papers/volume15/hsieh14a/hsieh14a.pdf)], SQUIC[[BS2016](http://www.icm.tu-bs.de/~bolle/Publicat/squic.pdf)], PISTA[[STY2023](https://arxiv.org/pdf/2205.10027)], GISTA[[GRR2012](https://arxiv.org/pdf/1211.2532)], OBN[[OON2012](https://papers.nips.cc/paper/2012/file/b3967a0e938dc2a6340e258630febd5a-Paper.pdf)], and ALM[[SG2010](https://proceedings.neurips.cc/paper/2010/file/2723d092b63885e0d7c260cc007e8b9d-Paper.pdf)].
On the other hand, we agree that converting the data $X$ into a representation space, such as using $UU^T$ in the objective or constraints, which can be used for clustering or computing the loss in the representation space, could make the problem nonconvex [[KYC2020](https://www.jmlr.org/papers/volume21/19-276/19-276.pdf)]. However, we know that standard proximal gradient methods for nonconvex optimization or their alternative variants for nonconvex alternating optimization (e.g., for $U$ and graph matrix $\Theta$) can still be applied [[BST2013](https://link.springer.com/article/10.1007/s10107-013-0701-9)]. Hence, the proximal multi-objective method developed for GMs in this paper can be extended to nonconvex graph optimization. This may require new proof techniques and estimations, which can be considered for future work. We will discuss this in the future work section.
> **Weakness 2:** There is just one baseline used in the experiments, which was published in 2012. It could strengthen the experiment section if modern GM estimation methods are used as baselines.
**Response:** Thank you for your feedback. We appreciate your suggestion to include more modern GM estimation methods as baselines. To address this, we have conducted additional experiments using several state-of-the-art GLasso algorithms. Specifically, we have applied different algorithms, including PISTA[[STY2023](https://arxiv.org/pdf/2205.10027)], GISTA[[GRR2012](https://arxiv.org/pdf/1211.2532)], and OBN[[OON2012](https://papers.nips.cc/paper/2012/file/b3967a0e938dc2a6340e258630febd5a-Paper.pdf)], to GLasso on synthetic data to demonstrate the robustness and effectiveness of our framework beyond ISTA.
ISTA was initially chosen as a baseline due to its simplicity and widespread use in sparse inverse covariance estimation. This provided a strong reference point for evaluating our method’s performance. However, recognizing the importance of a comprehensive evaluation, we have now compared our framework against more advanced baselines.
The results, detailed in the revised version of our paper and the **attached one-page PDF file**, show that our proposed framework consistently outperforms these modern baselines in terms of enhancing fairness while maintaining competitive performance. This comprehensive evaluation underscores the advantages of our approach across various scenarios.
> **Question 1:** Can the proposed method be applied to the estimation of directed graphs, like causal graphs?
**Response:** Thank you for your insightful question. While our proposed method is primarily designed for undirected graphical models like Gaussian Graphical Models and Ising Models, the principles of our fairness-aware optimization framework could potentially be adapted for directed graphs, such as causal graphs. Unlike many fair supervised methods[[GKK2019](https://ieeexplore.ieee.org/abstract/document/8437807),[JCM2022](https://openaccess.thecvf.com/content/CVPR2022/html/Jung_Learning_Fair_Classifiers_With_Partially_Annotated_Group_Labels_CVPR_2022_paper.html),[PLL2022](https://openaccess.thecvf.com/content/CVPR2022/html/Park_Fair_Contrastive_Learning_for_Facial_Attribute_Classification_CVPR_2022_paper.html)], our method does not use label information, and unlike previous methods for fair clustering or community detection[[CKL2017](https://arxiv.org/abs/1802.05733),[BCF2019](https://arxiv.org/abs/1901.02393)], our method does not use node attributes. Our fairness metric works with the disparity between global and local losses, i.e., it uses group-specific and global data to construct the objective. Hence, as a fairness metric it does not depend on any specific graph structure. Although the objective function differs for various graphical models, our new fairness notion (pairwise graph disparity error) remains the same.
Although our current work focuses on undirected graphs, we believe that with appropriate adjustments, our framework could be extended to handle directed graphs, ensuring fairness without disrupting causal dependencies. This presents a promising area for future research. Thank you for your constructive question, which has helped us consider the broader applicability of our method.
---
Rebuttal 2:
Title: Rebuttal by Authors (Part 2)
Comment: > **Question 2:** In Algorithm 1, the first step is to initialize local graph estimates for all groups. When the number of groups is large, the sample sizes of certain groups could become small, which may affect the accuracy of the local graph estimates. How will the accuracy of the local graph estimates affect the global graph estimate?
**Response:** Thank you for your insightful question. In Algorithm 1, we initialize local graph estimates for all groups, and we recognize that small sample sizes in certain groups could impact the accuracy of these local estimates. The accuracy of local graph estimates is indeed crucial, as these estimates form the foundation for the global graph estimate. When local estimates are based on small sample sizes, they may be less reliable, potentially introducing bias or variance that can propagate to the global graph estimate.
One possibility for handling the challenges of limited samples in certain subgroups is the bilevel optimization approach [[SXC2022](https://proceedings.neurips.cc/paper_files/paper/2022/file/dc96134e169de5aea1ba1fc34dfb8419-Paper-Conference.pdf)]. In this framework, the lower-level optimization learns subgroup-specific local graph matrices using the limited data available, guided by a fair model informed by the overall dataset. This approach is theoretically sound and empirically validated to improve fairness without sacrificing accuracy, even when some subgroups have limited samples. However, the bilevel formulation for graph learning can be computationally expensive as it requires hyper-gradient computations. Further, this approach requires novel estimation and convergence analysis. Therefore, we will add this as a future direction.
---
Rebuttal Comment 2.1:
Comment: Thank you for your thorough responses. I don't have further questions.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer fakh
Comment: We appreciate your time and valuable suggestions. | Summary: This paper explores the issue of fairness in the estimation of graphical models (GMs), specifically Gaussian, Covariance, and Ising models. The authors introduce a comprehensive framework aimed at mitigating bias in GM estimation concerning protected attributes. This framework integrates pairwise graph disparity error and a customized loss function into a non-smooth multi-objective optimization problem. The pairwise graph disparity error assesses loss discrepancies across all groups, while the tailored loss function evaluates GM performance. Through this approach, the authors seek to achieve fairness across diverse sensitive groups while upholding GM performance, supported by theoretical proofs and experimental validation on various real-world datasets.
Strengths: The paper provides theoretical analysis, complemented by detailed appendices. To aid comprehension, the authors include examples throughout the paper, which help clarify the theories and proposed framework.
The experimental results demonstrate that the Fair GMs framework reduces disparity error, thereby enhancing fairness, with only a slight decrease in the model’s performance.
Additionally, the paper addresses the challenge of balancing performance and fairness by employing a non-smooth multi-objective optimization problem.
Weaknesses: In the introduction and related works section, the paper does not provide complete information about GMs. Additionally, the paper does not clearly explain why it focuses on only three types of GMs instead of providing a general framework. The author needs to clearly highlight any difficulties encountered when applying this framework to each type of model.
In the second paragraph of Section 2 (Related Work on Fairness), the author does not discuss related works concerning fairness in GMs estimation. By focusing solely on fairness in unsupervised learning, the paper may limit readers' ability to gain a comprehensive understanding of the research field and the distinctiveness of the proposed method.
Moreover, the paper lacks clarity regarding the statement: “Our paper addresses the challenge of learning GMs without any predefined assumptions on the graph’s structure.” In the referenced paper, "Fair Community Detection and Structure Learning in Heterogeneous Graphical Models," the authors assume the presence of community structures and aim to learn a sparse undirected graph where demographic groups are fairly represented within these communities. This concept could be relevant to the current paper when the number of clusters is set to one. Therefore, the paper should clarify the differences and advantages of learning GMs without any predefined assumptions in the theoretical section.
In Section 4.8, trade-off analysis, the author mentions the shortcomings in runtime without clear discussion. Although there is a significant difference in runtime compared to the standard models, no reasons or future related works are mentioned in the subsequent sections.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please explain the vagueness and confusion in the “Weaknesses” section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The paper introduces a new framework that addresses fairness in the estimation of GMs, but it has some limitations. Firstly, the introduction lacks a comprehensive overview of current GMs and does not explain why the focus is on only three types of GMs rather than GMs in general. Secondly, the discussion of related work on fairness in GM estimation is insufficient and needs clearer explanations. Lastly, the implementation of different GMs requires distinct theorems with specific assumptions, which could limit the development and extension of the framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness 1:** In the introduction and related works section, the paper does not provide complete information about GMs. Additionally, the paper does not clearly explain why it focuses on only three types of GMs instead of providing a general framework. The author needs to clearly highlight any difficulties encountered when applying this framework to each type of model.
> **Limitation 1:** Firstly, the introduction lacks a comprehensive overview of current GMs and does not explain why the focus is on only three types of GMs rather than GMs in general.
> **Limitation 3:** Lastly, the implementation of different GMs requires distinct theorems with specific assumptions, which could limit the development and extension of the framework.
**Response:** We appreciate the reviewer’s comments and would like to clarify our approach further. Our paper focuses on three types of GMs: Gaussian Graphical Models, Gaussian Covariance Graph Models, and Binary Ising Graphical Models. These models were chosen because they represent a diverse set of widely used GMs with distinct characteristics and applications. Each type poses unique challenges in terms of estimation and fairness, making them suitable for demonstrating the effectiveness and versatility of our proposed framework.
*Importance of Each Graph Model:* Each of the three graph models (GLasso, sparse covariance estimation, and binary Ising model) addressed in our paper is of significant interest to the research community. Each of these models has been extensively studied and typically warrants dedicated papers. Here are a few examples to illustrate their importance:
*GLasso*:
- Pavlenko, Tatjana, Anders Björkström, and Annika Tillander. "Covariance structure approximation via gLasso in high-dimensional supervised classification." Journal of Applied Statistics 39.8 (2012): 1643-1666.
- Mazumder, Rahul, and Trevor Hastie. "The graphical lasso: New insights and alternatives." Electronic journal of statistics 6 (2012): 2125.
- Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. "Sparse inverse covariance estimation with the graphical lasso." Biostatistics 9.3 (2008): 432-441.
- Witten, Daniela M., Jerome H. Friedman, and Noah Simon. "New insights and faster computations for the graphical lasso." Journal of Computational and Graphical Statistics 20.4 (2011): 892-900.
*Sparse Covariance Estimation*:
- Bien, Jacob, and Robert J. Tibshirani. "Sparse estimation of a covariance matrix." Biometrika 98.4 (2011): 807-820.
- Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. "Sparse inverse covariance estimation with the graphical lasso." Biostatistics 9.3 (2008): 432-441.
- Bickel, Peter J., and Elizaveta Levina. "Regularized estimation of large covariance matrices." (2008): 199-227.
- Cai, Tony, Weidong Liu, and Xi Luo. "A constrained ℓ 1 minimization approach to sparse precision matrix estimation." Journal of the American Statistical Association 106.494 (2011): 594-607.
*Binary Ising Model*:
- Ravikumar, Pradeep, Martin J. Wainwright, and John D. Lafferty. "High-dimensional Ising model selection using ℓ 1-regularized logistic regression." (2010): 1287-1319.
- Anandkumar, Animashree, et al. "High-dimensional structure estimation in Ising models: Local separation criterion." The Annals of Statistics (2012): 1346-1375.
*General Framework Applicability:* While our algorithm is indeed a general framework applicable to various types of graphs, we chose to focus on GLasso, sparse covariance estimation, and binary Ising models to provide detailed parameters and convergence guarantees specific to each model. These models were selected to demonstrate the versatility and robustness of our framework. A more general assumption might dilute these specific details and potentially miss important information relevant to each individual model. By concentrating on these three, we ensure a comprehensive analysis, which includes specific parameter tuning and convergence properties, thus providing a stronger and more practical contribution to the field.
In summary, our paper’s focus on these three specific GMs is a deliberate choice to balance the depth and breadth of our contributions, ensuring detailed and practically relevant results. We believe this approach is beneficial for the community, as it provides clear and actionable insights for these widely studied models.
> **Weakness 2:** In the second paragraph of Section 2 (Related Work on Fairness), the author does not discuss related works concerning fairness in GMs estimation. By focusing solely on fairness in unsupervised learning, the paper may limit readers' ability to gain a comprehensive understanding of the research field and the distinctiveness of the proposed method.
> **Limitation 2:** Secondly, the discussion of related work on fairness in GM estimation is insufficient and needs clearer explanations.
**Response:** We appreciate your comments and agree that a more comprehensive discussion on fairness in graph model (GM) estimation would enhance the reader’s understanding. In our revised manuscript, we will include a dedicated paragraph to review the related works, specifically focusing on fairness in GM estimation.
---
Rebuttal 2:
Comment: To our knowledge, there are indeed a few works on fair graphical models that significantly differ from this work. The most relevant works to this study are [[68](https://arxiv.org/abs/2112.05128), [ZW2023](https://arxiv.org/abs/2311.13766), [NRB2024](https://arxiv.org/abs/2403.15591), [ZW2024](https://www.sciencedirect.com/science/article/pii/S0925231224009810)]. Specifically, [[68](https://arxiv.org/abs/2112.05128)] initiated the learning of fair GMs using an $\ell_1$-regularized pseudo-likelihood method for joint GM estimation and fair community detection. [[ZW2023](https://arxiv.org/abs/2311.13766), [ZW2024](https://www.sciencedirect.com/science/article/pii/S0925231224009810)] proposed a fair spectral clustering model that integrates graph construction, fair spectral embedding, and discretization into a single objective function. Unlike these models, which assume community structures, our study formulates fair GMs without such assumptions. Concurrently with this work, [[NRB2024](https://arxiv.org/abs/2403.15591)] proposed a regularization method for mitigating subpopulation bias for fair network topology inference. To our knowledge, this methodology significantly differs from ours, as we focus on developing three classes of fair GMs (Gaussian, Covariance, and Ising models) for imbalanced groups *without node attributes*, aiming to *automatically* ensure fairness through non-smooth multi-objective optimization.
> **Weakness 3:** Moreover, the paper ... predefined assumptions in the theoretical section.
**Response:** Thank you for your comment.
Firstly, the method in the referenced paper, "Fair Community Detection and Structure Learning in Heterogeneous Graphical Models," is specifically designed for scenarios where there are inherent community or cluster structures in the data. If the clustering were based on the number of observations (N) instead of the number of variables (P), setting the number of clusters to one would essentially imply that there is only one community. This would not address the issue of fairness as intended by the method. In such a scenario, the concept of fair community detection loses its relevance since there would be no differentiation between groups to balance fairness. Hence, the method is not applicable when the number of clusters is equal to one.
Secondly, the method assumes that the nodes are clusterable. However, in our case, we do not make such assumptions, as many applications involve graphs with nested structures rather than distinct communities. For example, consider a sparse graph with hub nodes [[TML2014](https://www.jmlr.org/papers/volume15/tan14b/tan14b.pdf)] or star graphs. In this setting, the nodes may not form distinct communities, challenging the method's assumptions.
Finally, we do not assume the existence of attributes on nodes, whereas the referenced method does. This can restrict applications such as gene network analysis (Section 5.1), where nodes could be genes that do not have attributes. In brain amyloid/tau accumulation network analysis (Section 5.2), nodes are regions of interest (ROIs) that do not have sensitive attributes. The requirement for nodes to have attributes limits the method's applicability in these contexts.
We will clarify these distinctions in the related work section.
> **Weakness 4:** In Section 4.8, ... in the subsequent sections.
**Response:** We appreciate the reviewer’s feedback regarding the trade-off analysis in Section 4.8. We acknowledge that the discussion on the shortcomings in runtime and potential future work was insufficiently detailed. We have expanded our discussion to include specific strategies to address these issues and outline potential future work.
One promising direction to accelerate the graphical model part of our algorithm is the adoption of faster optimization techniques. Specifically, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) has shown significant improvements in optimization efficiency in various contexts. By integrating FISTA into our framework, we can achieve considerable runtime reductions without compromising the accuracy and convergence properties of our models, as presented in **Table 2 in the attached one-page PDF file**.
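To illustrate the kind of accelerated update we have in mind, below is a minimal FISTA-style sketch for the GLasso subproblem. This is our own illustrative code (the function names `fista_glasso` and `soft_threshold` are ours, not from the paper), using a fixed step size and a crude symmetrization safeguard rather than a production line search:

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def fista_glasso(S, lam=0.1, step=0.01, iters=200):
    """FISTA sketch for: min_Theta  -log det(Theta) + tr(S Theta) + lam * ||Theta||_1.

    The smooth part has gradient S - inv(Theta). For simplicity the l1
    penalty is applied to every entry (diagonal included) and the step
    size is fixed instead of being chosen by line search."""
    p = S.shape[0]
    theta = np.eye(p)          # initial iterate
    y, t = theta.copy(), 1.0   # momentum point and momentum scalar
    for _ in range(iters):
        grad = S - np.linalg.inv(y)
        theta_next = soft_threshold(y - step * grad, step * lam)
        theta_next = (theta_next + theta_next.T) / 2  # keep the iterate symmetric
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = theta_next + ((t - 1.0) / t_next) * (theta_next - theta)  # momentum step
        theta, t = theta_next, t_next
    return theta
```

Setting `y = theta_next` (i.e., dropping the momentum step) recovers plain ISTA, which makes the source of the acceleration easy to isolate in a runtime comparison.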
Another factor impacting runtime is the number of groups considered. Reducing the number of groups or selecting representative sample groups can effectively manage computational complexity. One potential approach is the method proposed by [[SXC2022](https://openreview.net/pdf?id=YsRH6uVcx2l)], which suggests selecting a sample of groups randomly in each iteration of the optimization process to balance computational efficiency and model accuracy. By implementing a strategic sampling of groups, we can maintain the robustness and fairness of our models while significantly reducing the runtime. This approach will be explored in future work to optimize the balance between computational demands and model performance.
Title: Rebuttal by Authors (Part 2)
---
Rebuttal Comment 2.1:
Comment: Thank you for the response. I raised my rating.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer 1oDX
Comment: Thank you for your review and valuable suggestions. | Summary: This paper introduces a framework to address fairness in the estimation of graphical models, particularly focusing on Gaussian, Covariance, and Ising models. The motivation stems from the potential bias in standard GMs when handling data involving sensitive characteristics or protected groups. The proposed framework integrates a pairwise graph disparity error and a tailored loss function into a nonsmooth multi-objective optimization problem to reduce bias while maintaining the effectiveness of GMs.
Strengths: 1) Clear motivation: The paper tackles the critical and timely issue of fairness in graphical models, which is essential as machine learning applications become more widespread and impact diverse populations.
2) Innovative Approach: The integration of the pairwise graph disparity error and tailored loss function into a multi-objective optimization framework is a novel approach to achieving fairness in GMs.
3) Substantial technical contribution: The paper provides a solid theoretical foundation for the proposed framework. The appendix is also very informative; for example, it includes a complexity analysis of the proposed method.
Weaknesses: Fairness metric: The choice and justification of fairness metrics used in the evaluation could be more thoroughly discussed. This would provide a clearer understanding of how fairness is quantified and the implications of these choices.
Unconvincing experiment results: While the authors claim that the proposed method "approach enhances fairness without compromising performance" in line 306, Table 1 indicates that 6 out of 7 datasets experience lower F1 scores, suggesting worse performance.
Limited baseline: The paper could benefit from more extensive comparisons with existing fairness-aware methods in graphical models other than ISTA (the only baseline used in Table 2). This would help to contextualize the performance and advantages of the proposed framework.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) Could you elaborate more on the choice of fairness metrics, disparity error, used in your evaluation? What might be the implications for using such metric compared to other metrics, like those stemming from counterfactual evaluation?
2) While the authors claim that the proposed method improves fairness without compromising performance, Table 1 indicates that 6 out of 7 datasets experience lower F1 scores, suggesting worse performance. Could you clarify this discrepancy and provide a more detailed interpretation of Table 1?
3) It's mentioned that ISTA is used as baseline. What are other possible baselines? Why in particular choose ISTA as baseline?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The paper discusses the complexity limitation of the work but does not solve it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness 1:** Fairness metric: The choice and justification of fairness metrics used in the evaluation could be more thoroughly discussed. This would provide a clearer understanding of how fairness is quantified and the implications of these choices.
> **Question 1:** Could you elaborate more on the choice of fairness metrics, disparity error, used in your evaluation? What might be the implications for using such metric compared to other metrics, like those stemming from counterfactual evaluation?
**Response:** We appreciate the reviewer’s feedback on the need for a more thorough discussion of the fairness metrics used in our evaluation. We understand that the choice and justification of fairness metrics are critical for providing a clear understanding of how fairness is quantified and the implications of these choices.
In our work, we introduced the concept of graph disparity error as the primary fairness metric. This metric measures the difference in loss between the global model $\Theta$ and the optimal local models $\Theta_1^*, \Theta_2^*, \ldots, \Theta_K^*$ for each subgroup. The rationale behind this choice is rooted in the unsupervised nature of our task. Unlike supervised learning, which can leverage labeled data to define fairness metrics such as demographic parity (DP)[[JHF2022](https://par.nsf.gov/servlets/purl/10397778)] or equalized odds (EO)[[RBC2020](https://proceedings.neurips.cc/paper/2020/hash/03593ce517feac573fdaafa6dcedef61-Abstract.html)], our unsupervised learning context lacks such labels. Therefore, we require a fairness metric that does not rely on label information.
The graph disparity error metric was chosen because it effectively captures the notion of fairness by ensuring that the model’s performance is balanced across different subgroups. By minimizing this error, we aim to ensure that no subgroup is disproportionately favored or disadvantaged, thereby promoting a more equitable model. Despite this, our proposed graph disparity error actually aligns with the goals of DP by promoting a balanced representation of all subgroups. The global graph obtained from our method represents a central integration of local graphs, ensuring that the influence of each subgroup is balanced. This inherent balance aligns with the core objective of DP without the use of labeled data.
In our revised version, we further include the following paragraph to improve the discussion of the fairness metrics:
“In this work, we employ graph disparity error as our primary fairness metric. This metric measures the difference in loss between the global model and the optimal local models for each subgroup. Our choice is motivated by the unsupervised nature of our task, which lacks labeled data typically required for metrics such as demographic parity or equalized odds. Graph disparity error provides a label-independent measure of fairness, ensuring balanced model performance across subgroups. By minimizing this error, we reduce bias introduced by the data distribution, promoting an equitable model where no subgroup is disproportionately favored or disadvantaged. This approach aligns with the core objectives of demographic parity, as our global graph integrates local graphs, balancing the influence of each subgroup. Thus, our metric effectively quantifies fairness in high-dimensional graphical models, providing a robust framework for fair graph learning.”
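To make the metric concrete, the quantity described above can be sketched in a few lines of NumPy. This is an illustrative sketch under the Gaussian log-det loss $-\log\det\Theta + \mathrm{tr}(S\Theta)$; the function names are ours and not part of the paper's implementation:

```python
import numpy as np

def gaussian_loss(theta, S):
    """Gaussian graphical model loss: -log det(Theta) + tr(S Theta),
    where S is a group's sample covariance matrix."""
    _, logdet = np.linalg.slogdet(theta)
    return -logdet + np.trace(S @ theta)

def disparity_errors(theta_global, local_thetas, group_covs):
    """Graph disparity error per group: Delta_k = L_k(global) - L_k(local optimum)."""
    return [gaussian_loss(theta_global, S) - gaussian_loss(theta_k, S)
            for theta_k, S in zip(local_thetas, group_covs)]

def pairwise_disparity(deltas):
    """Sum of squared pairwise gaps (Delta_i - Delta_j)^2 over all group pairs."""
    d = np.asarray(deltas)
    return sum((d[i] - d[j]) ** 2
               for i in range(len(d)) for j in range(i + 1, len(d)))
```

Minimizing `pairwise_disparity` drives the extra loss each group pays under the shared global graph toward a common value, which is the balance the metric is meant to enforce.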
> **Weakness 2:** Unconvincing experiment results: While the authors claim that the proposed method "approach enhances fairness without compromising performance" in line 306, Table 1 indicates that 6 out of 7 datasets experience lower F1 scores, suggesting worse performance.
> **Question 2:** While the authors claim that the proposed method improves fairness without compromising performance, Table 1 indicates that 6 out of 7 datasets experience lower F1 scores, suggesting worse performance. Could you clarify this discrepancy and provide a more detailed interpretation of Table 1?
**Response:** We appreciate the reviewer’s comments on the experimental results and the interpretation of Table 1. We would like to clarify our claim that the proposed method enhances fairness without compromising performance.
Firstly, it is important to recognize the inherent trade-off in multi-objective optimization frameworks that balance fairness and accuracy. While our method prioritizes fairness, this may lead to a slight decrease in the $F_1$ score, as reflected in Table 1. However, these decreases are minimal and within acceptable ranges, indicating that the overall performance of the model remains robust. The marginal reductions in $F_1$ scores result from the necessary adjustments to ensure fair representation across all subgroups. We include this analysis in Section 4.8, Trade-Off Analysis.
To be more accurate, we would like to clarify our claim in our revised version as follows:
“This approach enhances fairness without compromising performance significantly, validated by experiments on synthetic and real-world datasets.”
---
Rebuttal 2:
Title: Rebuttal by Authors (Part 2)
Comment: > **Weakness 3:** Limited baseline: The paper could benefit from more extensive comparisons with existing fairness-aware methods in graphical models other than ISTA (the only baseline used in Table 2). This would help to contextualize the performance and advantages of the proposed framework.
> **Question 3:** It's mentioned that ISTA is used as baseline. What are other possible baselines? Why in particular choose ISTA as baseline?
**Response:** Thank you for your valuable feedback. In response to your concern, we have conducted additional experiments to apply our multi-objective optimization framework to several state-of-the-art GLasso algorithms. We specifically focused on synthetic data to showcase the robustness and effectiveness of our framework beyond the ISTA algorithm.
We initially chose ISTA as a baseline due to its simplicity and widespread use in sparse inverse covariance estimation, making it an ideal reference point. However, understanding the necessity for a more comprehensive evaluation, we have now included comparisons with advanced baselines such as PISTA[[STY2023](https://arxiv.org/pdf/2205.10027)], GISTA[[GRR2012](https://arxiv.org/pdf/1211.2532)], and OBN[[OON2012](https://papers.nips.cc/paper/2012/file/b3967a0e938dc2a6340e258630febd5a-Paper.pdf)].
Our additional experiments, detailed in the revised version of our paper, demonstrate that our framework consistently outperforms these advanced baselines in enhancing fairness while maintaining competitive performance. Specifically, all new baseline methods reached the optimal loss for GLasso. When applying our multi-objective optimization framework, Fair GLasso achieved comparable results and significantly reduced pairwise graph disparity error. This comprehensive evaluation underscores the advantages of our approach in various scenarios, reinforcing its robustness and effectiveness.
> **Limitation:** The paper discusses the complexity limitation of the work but does not solve it.
**Response:** We appreciate the reviewer’s observation regarding the time-consuming nature of our learning algorithm. Given our use of a multi-objective optimization framework, it is expected that the process would be more time-intensive. Nonetheless, our primary goal remains to offer a simple yet effective framework for fair graph learning. We recognize the need to enhance the computational efficiency of our method and outline potential directions to achieve this.
Firstly, the time complexity is partly due to the local graph learning phase. This can be significantly accelerated by utilizing fast graphical model algorithms such as QUIC[[HSD2014](https://jmlr.org/papers/volume15/hsieh14a/hsieh14a.pdf)], SQUIC[[BS2016](http://www.icm.tu-bs.de/~bolle/Publicat/squic.pdf)], PISTA[[STY2023](https://arxiv.org/pdf/2205.10027)], GISTA[[GRR2012](https://arxiv.org/pdf/1211.2532)], OBN[[OON2012](https://papers.nips.cc/paper/2012/file/b3967a0e938dc2a6340e258630febd5a-Paper.pdf)], and ALM[[SG2010](https://proceedings.neurips.cc/paper/2010/file/2723d092b63885e0d7c260cc007e8b9d-Paper.pdf)]. Integrating these faster algorithms into the multi-objective optimization process can substantially improve the overall efficiency of our learning algorithm.
Secondly, the growing number of objectives in the multi-objective optimization solver can also increase time complexity. To address this, we propose randomly selecting a subset of objectives in each iteration. This approach can reduce the computational load while preserving the model's fairness and performance.
To support our discussion, we conducted additional experiments using GLasso. In the first experiment, we generated synthetic data, including two subgroups, as described in the appendix of our paper. We applied various optimization algorithms to both the local graph learning and multi-objective optimization processes. The detailed numerical results, presented in **Table 2 of the attached one-page PDF file**, show that all tested optimization algorithms for GLasso achieved optimal loss while maintaining fairness improvements. Notably, GISTA and OBN for GLasso, along with FISTA for multi-objective optimization, significantly enhanced learning efficiency.
In the second experiment, we generated a synthetic dataset with ten subgroups to validate our strategy for reducing time complexity. In each iteration of the multi-objective optimization, we randomly selected three objectives. The numerical results in **Table 2 of the attached one-page PDF file** indicate that this strategy effectively reduces runtime without substantially compromising model performance.
These findings underscore the potential of our proposed methods to improve the computational efficiency of the learning algorithm while maintaining fairness and performance. We appreciate the reviewer’s feedback and believe these enhancements will further strengthen our framework.
---
Rebuttal 3:
Comment: I thank the authors for providing additional results. I think the experiments section looks stronger now, and I am willing to raise my score by 1.
---
Rebuttal Comment 3.1:
Title: Response to Reviewer tjnm
Comment: Thank you for the thorough review and positive feedback. | Summary: The paper investigates the issue of bias in 3 particular graphical models: Gaussian, Gaussian Covariance, and Binary Ising. In this regard, the authors propose a framework to enhance fairness in the estimation of graph models. They incorporate the difference of loss between the protected groups and the accuracy of graph models into a multi-objective optimization problem and solve it. The experiments show that the approach reduces the bias.
Strengths: 1. The paper is easy to follow
2. The choice of ISTA as the baseline seems like a very good decision
3. Applications of the proposed method are well established in the paper.
Weaknesses: 1. The authors only consider one criterion of fairness, which is achieving equal loss among subgroups. How do the authors justify focusing solely on this criterion? Additionally, how might incorporating other fairness criteria impact the results?
2. Many recent works utilize reinforcement learning to discover fair structures. For instance, the paper "Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition" by S. Dooley et al. (NeurIPS 2023) explores such an approach. How do the authors justify their method compared to these more comprehensive approaches?
3. The authors employ multi-objective optimization, suggesting that the loss in GMs and pairwise graph disparity are at odds. However, many papers indicate that this is not always the case. For example, the paper "Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing" by S. Dutta et al. (ICML 2020) discusses this. How do the authors justify their approach in light of such findings?
4. The learning algorithm appears to be very time-consuming. Do the authors have any comments on this aspect of their method?
5. Why do the authors not use more widely accepted metrics for fairness comparison and instead refer only to the difference in loss between subgroups?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses as well as these questions:
1. Practical graphs and practical fairness problems have high K and P values. This will raise the computational complexity to the extent that it may question the applicability of the method. Do the authors have comment on this?
2. What if the number of protected groups (like in the case of age) is high? How do the authors comment on these cases with regards to complexity analysis?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The applicability for real problems
2. It does not scale well
3. The study is limited to Gaussian, Gaussian Covariance, and Binary Ising models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness 1:** The authors only consider one criterion of fairness, which is achieving equal loss among subgroups. How do the authors justify focusing solely on this criterion? Additionally, how might incorporating other fairness criteria impact the results?
> **Weakness 5:** Why do the authors not use more widely accepted metrics for fairness comparison and instead refer only to the difference in loss between subgroups?
**Response:** Thank you for your insightful comments. Our primary focus on the difference in loss between subgroups as a fairness criterion stems from the unsupervised nature of our task. In unsupervised learning, unlike supervised learning, we lack labeled data that are typically used to define fairness criteria such as demographic parity (DP)[[JHF2022](https://par.nsf.gov/servlets/purl/10397778)] and equalized odds (EO)[[RBC2020](https://proceedings.neurips.cc/paper/2020/hash/03593ce517feac573fdaafa6dcedef61-Abstract.html)]. Specifically, incorporating other fairness criteria like DP and EO, while beneficial for providing a broader perspective on fairness, typically requires labeled data to assess prediction distributions and accuracy across groups. This makes them less applicable to our unsupervised context.
This necessitated the development of a new fairness criterion, which we introduce as *graph disparity error*, **solely based on the group-specific and global losses**. Indeed, our proposed graph disparity error aligns with the goals of DP by promoting a balanced representation of all subgroups using the loss function. Graph disparity error measures the difference in graph loss between a global model $\Theta$ and the optimal local models $\Theta_1^*, \Theta_2^*, \ldots, \Theta_K^*$ for each subgroup $k \in [K]$. This is validated through our experiments in Section 4. For example, Figure 2 illustrates the comparison of original graphs utilized in synthetic data creation for two groups: graph reconstruction using standard GMs and fair graph reconstruction via Fair GMs. The figure demonstrates that Fair GMs produce more balanced and fair graph reconstructions across different subgroups compared to standard GMs, thereby supporting the effectiveness of the proposed graph disparity error in achieving fairness.
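To make the criterion concrete, here is a minimal numerical sketch of the per-group disparity gap, using the standard GLasso loss $-\log\det\Theta + \operatorname{tr}(S\Theta)$ as the graph loss. The matrices and function names below are ours for illustration; the paper's exact (pairwise) definition may differ.

```python
import numpy as np

def glasso_loss(theta, S):
    """Gaussian graphical model loss: -log det(Theta) + tr(S @ Theta)."""
    sign, logdet = np.linalg.slogdet(theta)
    assert sign > 0, "Theta must be positive definite"
    return -logdet + np.trace(S @ theta)

def graph_disparity_errors(theta_global, local_thetas, local_covs):
    """Per-group gap between the global model's loss and the optimal
    local model's loss on each subgroup's sample covariance."""
    return [glasso_loss(theta_global, S) - glasso_loss(theta_k, S)
            for theta_k, S in zip(local_thetas, local_covs)]

# Toy example: two subgroups with different 2x2 covariances.
S1 = np.array([[1.0, 0.8], [0.8, 1.0]])
S2 = np.array([[1.0, -0.5], [-0.5, 1.0]])
# The (unregularized) optimal local models are Theta_k* = S_k^{-1}.
t1, t2 = np.linalg.inv(S1), np.linalg.inv(S2)
t_global = np.linalg.inv((S1 + S2) / 2)  # one shared model for both groups

errs = graph_disparity_errors(t_global, [t1, t2], [S1, S2])
print(errs)  # both gaps are >= 0, since each local optimum minimizes its loss
```

Because each $\Theta_k^*$ minimizes its own subgroup loss, every gap is nonnegative; a fairness-aware estimator trades a little global loss to balance these gaps across subgroups.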
While future research on graphical model estimation could explore more widely accepted fairness metrics in supervised learning, our current work establishes a disparity error-based approach. It is worth mentioning that our nonsmooth multi-objective optimization framework for graphs is novel in this area and flexible enough to incorporate other fairness objectives. Future research can build on our framework to explore additional fairness metrics.
We have incorporated some of these discussions into the introduction of the revised paper. Thank you.
> **Weakness 2:** Many recent works utilize reinforcement learning to discover fair structures. For instance, the paper "Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition" by S. Dooley et al. (NeurIPS 2023) explores such an approach. How do the authors justify their method compared to these more comprehensive approaches?
**Response:** We appreciate your comments and suggestions. The above paper explores the inherent biases in neural network architectures used for face recognition and proposes a novel approach to mitigate these biases through neural architecture search (NAS) and hyperparameter optimization (HPO). By conducting the first large-scale analysis of the impact of different architectures and hyperparameters on bias, the authors discovered that biases are baked into the architectures themselves. They designed a search space based on high-performing architectures and used NAS and HPO to jointly optimize for fairness and accuracy, considering sensitive attributes.
Although your suggested reference might be useful in the context of neural networks, such as graph neural networks, our problem is significantly different and is not related to neural architecture search or supervised learning. Indeed, our model focuses on unsupervised fair graph learning and does not involve any neural architecture search. Many standard fairness metrics, such as demographic parity or equalized odds, originally developed for supervised learning and widely used in neural networks, may not apply here. Our method is inspired by the idea that equal loss among subgroups ensures that no single group is disproportionately disadvantaged by the learned model, which aligns with established fairness definitions in the unsupervised machine learning literature.
We have added further discussion to the revised paper. Thank you.
---
Rebuttal 2:
Title: Rebuttal by Authors (Part 2)
Comment: > **Weakness 3:** The authors employ multi-objective optimization, suggesting that the loss in GMs and pairwise graph disparity are at odds. However, many papers indicate that this is not always the case. For example, the paper "Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing" by S. Dutta et al. (ICML 2020) discusses this. How do the authors justify their approach in light of such findings?
**Response:** We appreciate the reviewer's comment on the trade-off between fairness and accuracy. The paper by S. Dutta et al. (ICML 2020) argues that the observed trade-off may stem from the real-world application of biased datasets rather than an intrinsic limitation of the algorithms themselves. They emphasize the importance of considering ideal distributions when evaluating fairness and accuracy and highlight the potential of active data collection to improve fairness without compromising accuracy.
In our work, we focus on the algorithm's fairness. Classical graphical models aim to achieve optimal loss for the given data distribution but may propagate and amplify existing biases. Our fairness-aware optimization framework aims to mitigate these biases systematically at the algorithmic level, independent of the data distribution.
While classical graphical models achieve optimal loss, they do not inherently address fairness concerns. Our multi-objective optimization framework explicitly incorporates a fairness criterion, balancing the trade-off between minimizing loss and reducing pairwise graph disparity. Multi-objective optimization is a promising approach for integrating fairness considerations into machine learning models. Prior works, such as [[MBS2020](https://arxiv.org/pdf/2011.01821.pdf)], have formulated group fairness as a multi-objective optimization problem with separate objectives for each sensitive group’s risk. This strategy aligns closely with works like [[PAG2021](https://arxiv.org/pdf/2009.04441.pdf)] and [[MSG2021](https://arxiv.org/pdf/2110.01951.pdf)], which enhance fairness in classification. Further exploration by [[PBD2022](https://arxiv.org/pdf/2006.06137.pdf)], and [[KHF2018](https://arxiv.org/pdf/1911.04931.pdf)] demonstrates the effectiveness of multi-objective optimization in improving fairness in unsupervised PCA algorithms.
These efforts underscore the growing consensus on the potential of multi-objective optimization to address fairness issues in machine learning. On the other hand, existing solutions may not adequately address fairness in nonsmooth or high-dimensional settings, where our framework offers novel insights and solutions. To our knowledge, ours is the first nonsmooth multi-criteria method for fairness, applicable to other problems involving nonsmooth regularizations such as sparse fair PCA or fair supervised learning based on ERM with sparsity/norm-one regularization.
---
Rebuttal 3:
Title: Rebuttal by Authors (Part 3)
Comment: > **Weakness 4:** The learning algorithm appears to be very time-consuming. Do the authors have any comments on this aspect of their method?
> **Question 1:** Practical graphs and practical fairness problems have high K and P values. This will raise the computational complexity to the extent that it may question the applicability of the method. Do the authors have comment on this?
> **Question 2:** What if the number of protected groups (like in the case of age) is high? How do the authors comment on these cases with regards to complexity analysis?
> **Limitation 2:** It does not scale well
**Response:** We appreciate the reviewer’s observation regarding the time-consuming nature of our learning algorithm. Given the multi-objective optimization framework we employ, it is indeed expected to be more time-intensive. However, our primary goal is to provide an automatic (without any additional regularization or related hyperparameter tuning) and effective framework for fair graph learning. We acknowledge the importance of improving the computational efficiency of our method and outline potential future directions to address this concern.
Firstly, the time complexity partly arises from the local graph learning phase. This can be significantly accelerated using existing fast graphical model algorithms such as QUIC[[HSD2014](https://jmlr.org/papers/volume15/hsieh14a/hsieh14a.pdf)], SQUIC[[BS2016](http://www.icm.tu-bs.de/~bolle/Publicat/squic.pdf)], PISTA[[STY2023](https://arxiv.org/pdf/2205.10027)], GISTA[[GRR2012](https://arxiv.org/pdf/1211.2532)], OBN[[OON2012](https://papers.nips.cc/paper/2012/file/b3967a0e938dc2a6340e258630febd5a-Paper.pdf)], and ALM[[SG2010](https://proceedings.neurips.cc/paper/2010/file/2723d092b63885e0d7c260cc007e8b9d-Paper.pdf)]. Incorporating faster algorithms into the multi-objective optimization process can also enhance the overall efficiency of our learning algorithm.
Secondly, the increase in time complexity can also be attributed to the growing number of objectives in the multi-objective optimization solver. To mitigate this, we propose selecting a subset of objectives randomly in each iteration. This approach can reduce the computational load without significantly compromising the model's fairness and performance.
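The subsampling strategy can be illustrated generically. The toy sketch below is entirely ours (not the paper's solver): K = 10 scalar objectives $f_j(x) = (x - c_j)^2$ are minimized by descending, at each iteration, on the averaged gradient of a random subset of three of them.

```python
import random

def subsampled_descent(centers, k=3, iters=2000, lr=0.05, seed=0):
    """Minimize K objectives f_j(x) = (x - c_j)^2 by descending, at each
    iteration, on the averaged gradient of a random subset of k of them."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(iters):
        subset = rng.sample(centers, k)       # pick k of the K objectives
        g = sum(2 * (x - c) for c in subset) / k
        x -= lr * g
    return x

centers = [float(c) for c in range(10)]       # K = 10 objectives
x = subsampled_descent(centers)
# x settles near the full-objective optimum (the mean of the centers, 4.5),
# while each iteration only touches 3 of the 10 objectives.
```

The per-iteration cost drops from O(K) to O(k) gradient evaluations, at the price of some stochastic fluctuation around the full-problem solution.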
To validate our discussion, we conducted additional experiments using GLasso. In the first experiment, we generated synthetic data following the procedure detailed in the appendix of our paper, which includes two subgroups, each having 1000 observations and 100 variables. We applied various optimization algorithms to both the local graph learning and the multi-objective optimization processes. The detailed numerical results, presented in **Table 2 of the attached one-page PDF** file, demonstrate that all tested optimization algorithms for GLasso achieved optimal loss while maintaining performance improvements in fairness. Notably, GISTA and OBN for GLasso, along with FISTA for multi-objective optimization, significantly improved learning efficiency. We also repeated this experiment for high P values (P = 200), and the detailed numerical results presented in **Table 2 of the attached one-page PDF** further validate the improved efficiency of faster graphical model algorithms.
In the second experiment, we generated a synthetic dataset with ten subgroups to validate our approach to reducing time complexity. In each iteration of the multi-objective optimization, we randomly selected three objectives. The numerical results in **Table 2 of the attached one-page PDF file** indicate that this strategy effectively reduces runtime without substantially sacrificing model performance.
These results highlight the potential of our proposed methods to improve the computational efficiency of the learning algorithm while maintaining fairness and performance. We appreciate the reviewer’s feedback and believe these enhancements will further strengthen our framework.
---
Rebuttal 4:
Title: Rebuttal by Authors (Part 4)
Comment: > **Limitation 1:** The applicability for real problems
**Response:** Thank you for your comment. We have provided the real-world applications of our proposed method in Section 4 ( Experiment). Specifically, in Sections 4.4 and 4.5, we analyzed the application of the fair GLasso on the Gene Regulatory Network and the Brain Amyloid/Tau Accumulation Network. Section 4.6 explored the application of Fair CovGraph in Credit Dataset, which promotes a more equitable credit system. In addition, we demonstrated the application of the Fair binary Ising model on Music Recommendation Systems in Section 4.7. These real-world examples illustrate the practical applicability of our proposed method.
> **Limitation 3:** The study is limited to Gaussian, Gaussian Covariance, and Binary Ising models.
**Response:** Thank you for your comment. Our paper focuses on three types of graphical models: Gaussian Graphical Models, Gaussian Covariance Graph Models, and Binary Ising Graphical Models. These models were chosen because they represent a diverse set of widely used GMs with distinct characteristics and applications. Indeed, each of these models is of significant interest to the research community and has been extensively studied. For example:
* GLasso: Widely used for sparse inverse covariance estimation, as demonstrated by Friedman et al. (2008) [[FHT2008](https://academic.oup.com/biostatistics/article/9/3/432/224260)] and Witten et al. (2011)[[WFS2011](https://www.jstor.org/stable/23248939)].
* Sparse Covariance Estimation: Important for large covariance matrix estimation, with notable contributions from Bickel and Levina (2008)[[BL2008](https://projecteuclid.org/journals/annals-of-statistics/volume-36/issue-1/Regularized-estimation-of-large-covariance-matrices/10.1214/009053607000000758.full)] and Cai et al. (2011)[[CLL2011](https://www.tandfonline.com/doi/abs/10.1198/jasa.2011.tm10155)].
* Binary Ising Model: Crucial for high-dimensional discrete data, highlighted by the work of Ravikumar et al. (2010)[[RWL2010](https://projecteuclid.org/journals/annals-of-statistics/volume-38/issue-3/High-dimensional-Ising-model-selection-using-ℓ1-regularized-logistic-regression/10.1214/09-AOS691.full)].
While our algorithm is a general framework applicable to various types of graphs, we focused on GLasso, sparse covariance estimation, and binary Ising models to provide detailed parameters and convergence guarantees specific to each model. This focus ensures a comprehensive analysis, including specific parameter tuning and convergence properties, thus providing a stronger and more practical contribution to the field. Generalizing too broadly might dilute these specific details and overlook important model-specific insights.
In summary, our focus on these three specific GMs balances the depth and breadth of our contributions, ensuring detailed and practically relevant results. We believe this graphical model estimation approach is beneficial for the community, offering clear and actionable insights for these widely studied graphical models.
---
Rebuttal Comment 4.1:
Comment: I have noticed that the reviewer "1oDX" also asked about why the authors focus only on the Gaussian case. This needs to be discussed in the paper.
---
Rebuttal 5:
Comment: I thank the authors for addressing my concerns. I have read the rebuttal.
Here are my post-rebuttal comments:
Weakness 1 and 5:
The loss disparity between K sub-groups seems rational to me. Just please state explicitly (in bold) in the paper that you need to have access to the label of the sensitive attribute.
Weakness 2:
What I meant was using reinforcement learning instead of multi-objective optimization. Please discuss it in the related work section.
Weakness 3:
Nothing new is added by the authors. My question remains.
Weakness 4:
I agree with the authors. Using the strategies, the computational overhead can indeed be reduced.
Limitation 1:
Thanks for bringing it to my attention.
Limitation 3:
Nothing new is added and the question remains. The authors can have a brief discussion about this in the paper.
Given all the comments, I stick to my rating.
---
Rebuttal 6:
Title: Response to Reviewer nXfy
Comment: > The study is limited to Gaussian, Gaussian Covariance, and Binary Ising models.
> I have noticed that the reviewer "1oDX" also asked about why the authors focus only on the Gaussian case. This needs to be discussed in the paper.
We sincerely appreciate your feedback and the time you’ve taken to review our work. We would like to clarify that our work encompasses more than just the Gaussian case. Specifically, we address three canonical graphical models in statistical learning:
* Inverse Covariance Estimation,
* Covariance Estimation, and
* Ising Model Estimation.
It is crucial to emphasize that the Ising model is not Gaussian; it represents a distinct type of graphical model, particularly used for binary data.
Moreover, many other types of graphical models or statistical graph learning approaches can be viewed as variants derived from these canonical models.
We further elaborate on this below.
Firstly, we review two canonical types of graphical models (we have eliminated the discussion on Covariance Estimation as it falls under the category of Gaussian models for continuous data):
**Ising Model for Binary Data:** This model is defined by a probability distribution over binary variables. The distribution is determined by the so-called *Boltzmann distribution*:
$$
p(\mathbf{x}; \Theta) = \left(Z(\Theta)\right)^{-1} \exp\left( \sum_{j=1}^{P} \theta_{jj}x_j + \sum_{1 \leq j < j' \leq P} \theta_{jj'}x_jx_{j'} \right).
$$
Here, $\Theta$ (graph matrix) is a symmetric matrix, and $Z(\Theta)$ is the partition function that normalizes the density.
These models are primarily designed for binary data [[HT2009](https://www.jmlr.org/papers/volume10/hoefling09a/hoefling09a.pdf), [RWL2010](https://projecteuclid.org/journals/annals-of-statistics/volume-38/issue-3/High-dimensional-Ising-model-selection-using-ℓ1-regularized-logistic-regression/10.1214/09-AOS691.full)].
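For intuition, the Boltzmann distribution of a small Ising model can be computed exactly by enumerating all $2^P$ states. The sketch below is our own illustration (with $\pm 1$-valued states and an arbitrary $\Theta$); the normalizer is the partition function $Z(\Theta)$.

```python
from itertools import product
import math

def ising_probs(theta):
    """Boltzmann distribution of a small Ising model by brute-force
    enumeration of all 2^P states (states here are +/-1 vectors)."""
    P = len(theta)
    states = list(product([-1, 1], repeat=P))
    def score(x):
        s = sum(theta[j][j] * x[j] for j in range(P))
        s += sum(theta[j][jp] * x[j] * x[jp]
                 for j in range(P) for jp in range(j + 1, P))
        return s
    weights = [math.exp(score(x)) for x in states]
    Z = sum(weights)  # the partition function Z(Theta)
    return {x: w / Z for x, w in zip(states, weights)}

# 3 variables; a positive coupling theta_{01} = 1 makes configurations
# where variables 0 and 1 agree more likely.
theta = [[0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0]]
p = ising_probs(theta)
agree = sum(pr for x, pr in p.items() if x[0] == x[1])
print(round(agree, 3))  # -> 0.881, i.e. the edge induces positive dependence
```

Brute-force enumeration is of course only feasible for tiny $P$; it is the intractability of $Z(\Theta)$ that motivates the pseudo-likelihood-style estimators cited above.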
**Gaussian Graphical Model for Continuous Data:** This model is suitable for continuous variables. The Gaussian graphical model is defined by the following distribution:
$$
p(\mathbf{x}; \Theta) = \frac{1}{(2\pi)^{P/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2} \mathbf{x}^T \Sigma^{-1} \mathbf{x}\right)
$$
where $\mathbf{x} \sim \mathcal{N}(0, \Sigma)$, $\Sigma$ is the covariance matrix, and $\Theta=\Sigma^{-1}$ (graph matrix) represents the precision matrix.
This model is widely used for continuous data [[MB2006](https://projecteuclid.org/journals/annals-of-statistics/volume-34/issue-3/High-dimensional-graphs-and-variable-selection-with-the-Lasso/10.1214/009053606000000281.short), [YL2007](https://academic.oup.com/jrsssb/article-abstract/68/1/49/7110631), [ABG2008](https://epubs.siam.org/doi/abs/10.1137/060670985), [FHT2008](https://academic.oup.com/biostatistics/article-abstract/9/3/432/224260), [RZY2008](https://arxiv.org/abs/0807.3734)].
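The defining property of this model — a zero entry $\theta_{jj'} = 0$ encodes conditional independence of $x_j$ and $x_{j'}$ given the rest, i.e. a missing edge — can be seen in a tiny numpy example (the chain-graph precision matrix below is our own illustration):

```python
import numpy as np

# Precision matrix of a chain graph 1 - 2 - 3: no direct edge between
# variables 1 and 3, so Theta[0, 2] == 0.
theta = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
sigma = np.linalg.inv(theta)     # covariance of the model
# Marginally, variables 1 and 3 are still correlated ...
print(abs(sigma[0, 2]) > 1e-9)   # True
# ... yet Theta[0, 2] == 0 encodes their conditional independence given x_2.
```

This is why sparsity is imposed on the precision matrix (as in GLasso) rather than on the covariance: zeros in $\Theta$, not in $\Sigma$, correspond to missing edges.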
Secondly, as we mentioned earlier, the Ising model is not designed for the Gaussian case; it is built specifically for binary rather than continuous data. This distinction makes the Ising model particularly suitable for applications involving binary data, such as modeling interactions between binary states in statistical physics or capturing binary decisions in social networks.
Finally, many other types of graphical models or statistical graph learning approaches can be viewed as variants derived from these two canonical models. For example,
* *Graphical models for ordinal variables* [[GLM2015](https://doi.org/10.1080/10618600.2014.889023)] assumes that the (categorical) data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution.
* *Mixed Graphical Models* [[CLL2017](https://doi.org/10.1080/10618600.2016.1237362)] combines Ising and Gaussian models to handle mixed data, including both continuous and discrete variables.
* *Structured Graphical Models* [[KYC2020](https://www.jmlr.org/papers/volume21/19-276/19-276.pdf)] combines Gaussian graphical models with spectral graph theory for joint clustering and graph learning, as well as bipartite graph learning.
Our current framework effectively addresses these canonical forms of graphical models—Gaussian for continuous data and Ising for binary data—within a single paper.
Additionally, it can be extended to other models derived from these canonical forms, some of which are listed above.
We believe that our non-smooth multi-objective framework is novel for graphical model analysis and addressing fairness issues. It provides tools for a rich class of graphical model estimation, and corrects the biases present in traditional approaches to graphical model estimation. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time in providing feedback and questions about our submitted paper. Below, we summarize the main issues that the reviewers raised, along with a summary of our responses. Furthermore, we provide **new real experiments** to address the reviewers' comments, which are **attached as an additional PDF file**.
* **Fairness Metrics (By nXfy):** The reviewer asked about the use of only one criterion of fairness, achieving equal loss among subgroups, and the lack of widely accepted metrics for fairness comparison.
* **Computational Complexity (for large \(P\) and \(N\)) (By nXfy):** The reviewer noted concerns about the computational complexity of our method, particularly for large \(P\) and \(N\) values.
* **Limited Baselines (By tjnm, fakh):** Reviewers tjnm and fakh pointed out that the paper could benefit from more extensive comparisons with existing fairness-aware methods in graphical models.
* **Related Work on Fairness (By 1oDX):** The reviewer suggested that the related works section should provide more comprehensive information about fairness in GMs estimation and the distinctiveness of our method.
* **Non-convex Objective (By fakh):** The reviewer raised concerns about the performance of our method under non-convex loss functions, which are common in machine learning.
We have taken several actions to address these concerns. More specifically:
**A1:** Our primary focus on the difference in loss between subgroups as a fairness criterion stems from the unsupervised nature of our task. In unsupervised learning, we lack labeled data that are typically used to define fairness criteria such as demographic parity (DP)[[JHF2022](https://par.nsf.gov/servlets/purl/10397778)] and equalized odds (EO)[[RBC2020](https://proceedings.neurips.cc/paper/2020/hash/03593ce517feac573fdaafa6dcedef61-Abstract.html)]. Incorporating other fairness criteria like DP and EO requires labeled data to assess prediction distributions and accuracy across groups, which is less applicable in our context. Thus, we introduced the *graph disparity error* based on group-specific and global losses. This metric aligns with DP by promoting a balanced representation of subgroups through the loss function.
While future research could explore more widely accepted fairness metrics in supervised learning, our current work establishes a disparity error-based approach. Our framework is novel in this area and flexible enough to incorporate other fairness objectives, and we have included discussions of these points in the revised paper.
Further details are provided in response to Reviewer nXfy.
**A2:** We appreciate the reviewer's observation regarding the time-consuming nature of our algorithm. The time complexity arises partly from the local graph learning phase, which can be accelerated using existing fast graphical model algorithms. Additionally, the growing number of objectives in the multi-objective optimization solver increases time complexity. To reduce the computational load, we propose selecting a subset of objectives randomly in each iteration.
To validate our approach, we conducted additional experiments using various optimization algorithms for GLasso and found significant improvements in learning efficiency with our proposed methods. These enhancements are discussed in the revised paper.
Further details are provided in response to Reviewers tjnm and fakh.
**A3:** We have now included additional experiments applying our method to several state-of-the-art GLasso algorithms beyond ISTA. The results show that our framework consistently outperforms advanced baselines in enhancing fairness while maintaining competitive performance. These comparisons highlight the advantages of our approach and are detailed in the revised version of our paper.
Further details are provided in the response to Reviewer 1oDX.
**A4:** We agree that a more comprehensive discussion on fairness in GM estimation would enhance the reader’s understanding. In the revised manuscript, we have included a dedicated paragraph reviewing recent advancements in fairness for graphical models, such as Fair GLASSO and fair structure learning in heterogeneous graphical models. These additions clarify the context and distinctiveness of our work.
Further details are provided in the response to Reviewer 1oDX.
**A5:** While our current work assumes the convexity of the loss function, many theoretical results in graphical models use convex objectives. However, proximal gradient methods can converge even in non-convex settings, validating their robustness and flexibility. Future work aims to enhance the applicability and robustness of our framework to broader practical scenarios, including non-convex settings.
Further details are provided in the response to Reviewer fakh.
Pdf: /pdf/57c890112781b4d23ee1fe294dac41280ba06ff2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stabilized Proximal-Point Methods for Federated Optimization | Accept (spotlight) | Summary: The authors of the paper extend the DANE algorithm to the stabilized-DANE (S-DANE) algorithm based on intuition from the stabilized proximal point method. They also enhance the proposed S-DANE method with Monteiro-Svaiter acceleration. The algorithms proposed by the authors allow for partial participation and various local solvers to some extent. Convergence analysis and experiments are provided.
Strengths: 1. The presentation of the paper is clear, logical and easy to follow.
2. The paper proposes two novel algorithms S-DANE and its accelerated version and provides convergence analysis of them, experiments are also provided to validate the claim of the paper.
3. The proposed algorithms allow for partial participation and various local solvers in some sense.
Weaknesses: 1. The partial client participation cases of S-DANE and accelerated S-DANE both rely on the bounded gradient dissimilarity assumption, which is violated in many cases and is not accurate enough when describing the effect of data heterogeneity. This raises the reviewer's concern about whether the algorithm is practical in the case of partial client participation.
2. The paper wants to address the communication bottleneck in the FL setting by reducing the total number of communication rounds the algorithm needs to reach a certain accuracy level. However, it is also important to take into account the number of bits transferred from the client to the server, as they are often constrained by the bandwidth. In fact, the terms communication complexity/efficiency that appear in the paper are quite misleading.
3. As the authors have suggested, the analysis is carried out in the case where each local objective $f_i$ is $\mu$-strongly convex, which is a little restrictive.
4. There are some typos (e.g., Line 265).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Table 1, why do the authors compare the number of communication rounds rather than the total amount of communication for each algorithm? Do all of those algorithms have the same communication overhead in each communication round? If the reviewer is not mistaken, some algorithms (such as SVRP) in the table work in the single-client setting; is it fair to compare them directly to S-DANE in the full client participation setting?
2. For S-DANE (and its accelerated version), how should the proper stepsize be determined? In the convergence guarantee, $\gamma$ depends on the unknown average Hessian dissimilarity constant $\bar{\delta}_s$ and the bounded gradient diversity constant $\zeta$. Could the authors elaborate on how the two constants are estimated?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the great evaluation of our paper. Your every comment is important to us. We did our best to
understand and reply to your constructive feedback as follows:
> W1
Thanks for the great question. Here are some justifications.
1. Interestingly, we do not use any smoothness-related assumptions to prove the communication complexity. Therefore, the function classes also include Lipschitz and non-smooth functions. In this case, we believe using Definition 4 is reasonable.
2. If we further assume that the function $f$ is $L$-smooth, then we agree that Definition 4 can be relaxed. For instance, one can consider $\frac{1}{n}\sum_{i=1}^n ||\nabla f_i (x) - \nabla f(x)||^2 \le \zeta^2 + \beta ||\nabla f(x)||^2 \le \zeta^2 + 2\beta L (f(x) - f^\star)$ with $\beta > 0$, which captures the growth behaviour of the dissimilarity. The only change in the proof will appear in line 589 where we have an additional error term which can be canceled by choosing small $\gamma$ and large $\lambda$ that depend also on $\beta$ and $L$.
3. The interesting regime of considering partial client participation is when the number of clients $n$ is potentially very large. Then Definition 4 allows almost unbounded distribution of the gradient dissimilarity (similar to the notion of 'stochastic gradient noise' for centralized SGD). Therefore, we believe using this assumption is still reasonable in practice. More detailed discussions can be found in Section 2 from [22] where the same assumption is used.
> W2 \& Q1
Thanks for this nice question! We refer to the attached pdf for the clarification of your concerns. Please let us know if there is anything unclear. We will add the discussions to our main manuscript.
> W3
Yes! However, note that our analysis works for standard convex functions ($\mu = 0$) as well. In the strongly convex case, when the function is of the form $f(x) = \frac{1}{n} \sum_{i = 1}^n \phi_i(x) + \frac{\mu}{2} || x ||^2$, we can rewrite it as $f(x) = \frac{1}{n} \sum_{i=1}^n f_i(x)$ with $\mu$-strongly convex $f_i(x) = \phi_i(x) + \frac{\mu}{2} || x ||^2$.
In practice, the strong convexity often comes from adding explicit regularization of the local convex loss function (e.g. regularized logistic regression). Then each function typically is strongly-convex and $\mu$ is often known.
We can actually propose the version of our method for composite functions $F(x) = f(x) + \psi(x)$ for which such an artificial split would be unnecessary but we do not do it for simplicity.
> W4: typos
Thanks for finding the typos!
> Q2
Thanks for asking this important question. We refer to the section of 'adaptive S-DANE with line-search' in the attached pdf for answering this question.
We thank you again for your great review! If you agree that we managed to address all issues, please consider raising your score–this would totally boost our chance. If you believe this is not the case, please let us know and we will try our best to answer your every question!
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The rebuttal has addressed most of my concerns, therefore I decide to raise my score.
---
Reply to Comment 1.1.1:
Comment: Many thanks for the support! | Summary: An algorithm for distributed convex optimization with partial participation is proposed, under a similarity assumption.
Strengths: The proposed algorithm Acc-S-DANE has, for the first time, a claimed communication complexity of $O(\sqrt{\delta/\mu}\log(1/\epsilon))$. (I did not check the details of the proof but the main lines look correct to me).
Weaknesses: * I want you to discuss your statement "Suppose we use the standard gradient descent as a local solver, then the number of gradient steps required to solve the subproblem increases across the iterations k". I don't think this is correct. In fact, you should discuss the 5GCS algorithm in Grudzien et al. "Can 5th Generation Local Training Methods Support Client Sampling? Yes!" AISTATS 2023. 5GCS is essentially a Point-SAGA algorithm with inexact computation of the proximity operators, and different strategies to solve these subproblems are discussed. The number of GD steps does not increase, you just pay a log(L/mu) factor for that in the complexity.
* You need to provide 2 tables: Table 1 in the full participation case, and a second table on existing algorithms for partial participation of $s$ clients. In particular, 5GCS and TAMUNA in "TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation" arXiv:2302.09832, 2023 have communication complexity $O\left(\sqrt{\frac{nL}{s\mu}}+\frac{n}{s}\right)\log(1/\epsilon)$.
* The complexity of Catalyzed SVRP reported in Table 1 is not correct. This value is for a different communication model, which counts "exchanging a vector between the server and one of the clients as a single communication step". So, if $s$ clients communicate, the complexity is multiplied by $s$, and $s=1$ gives the best complexity in this sense, since asynchronicity is encouraged. This is clearly different from measuring the number of communication rounds, in which $s<n$ *worsens* the complexity.
* You should discuss the paper Beznosikov et al. "Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities", arXiv:2302.07615
* You should compare in your experiments to some of the methods mentioned above, in the two heterogeneous (delta = L) and homogeneous (delta<<L) regimes: does your method improve on them by exploiting the similarity, whereas other methods do not? (it might be that an algorithm also benefits from similarity, but has just not been studied under the similarity assumption in the literature).
* Definition 4 is very restrictive; it means that the $f_i$ are all the same up to a linear difference. This is an auxiliary result but it is not very useful in practice.
Typo in several places: mu-convex -> mu-strongly convex
The paper [4] has been published in ICLR.
Technical Quality: 3
Clarity: 3
Questions for Authors: Do you assume that $\delta\geq \mu$? Because a complexity of $\sqrt{\delta/\mu}$ is a strong statement, it means that the algorithm converges in 1 round if $\delta \rightarrow 0$. FedRed, for instance, has $(\delta+\mu)/\mu$. Please check.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the great evaluation of our paper. Your every comment is important to us. We did our best to
understand and reply to your constructive feedback as follows:
> W1
Thanks for the nice question.
1. This statement only appears in, and refers to, the accuracy condition written in line 124. To satisfy this condition when using GD, the number of steps should depend on $k$, which is correct.
2. This condition is only stated for the original proximal point method (eq 2). It is well-known that the local computation has an unnecessary logarithmic dependency on either k or the final accuracy for this method, which can be improved by using more advanced variants [1,2,3].
3. 5GCS is different from the original proximal point method when $n=1$. In Table 1, many previous works (with proximal-point steps) also do not require increasing the number of GD steps, such as AccSVRS. These facts do not contradict this statement because these are different methods.
4. Acc-S-DANE is strictly better than 5GCS at least in the full-client participation setting. **1)** The communication complexity depends on $\delta_A$ instead of $L$ for 5GCS. **2)** Even if $\delta_A = L$, the number of local steps is 1 instead of $\tilde{O}( \sqrt{L/\mu} )$ for 5GCS. **3)** We prove convergence for the function value, which is stronger than the squared distance for 5GCS. **4)** It works well when $\mu \to 0$, while 5GCS seems not well-defined when $\mu = 0$.
Nevertheless, 5GCS is an excellent method in some settings and we will add more discussion on it.
[1] Svaiter B.F. and Solodov, M.V. A hybrid projection-proximal point algorithm. Journal of Convex Analysis. 1999.
[2] Carmon, Yair, et al. Recapp: Crafting a more efficient catalyst for convex optimization. ICML 2022.
[3] Ivanova, Anastasiya, et al. Adaptive Catalyst for Smooth Convex Optimization. International Conference on Optimization and Applications. 2021.
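As an editor's illustration (not part of the rebuttal), the complexity gap claimed in point 4 above can be made concrete by plugging sample constants into the two round-count expressions; the numerical constants and the $\sqrt{L/\mu}$-type baseline are assumptions for illustration only:

```python
import math

# Editor's sketch: compare the claimed O(sqrt(delta_A/mu) * log(1/eps))
# communication complexity of Acc-S-DANE against a smoothness-dependent
# accelerated rate O(sqrt(L/mu) * log(1/eps)), for sample constants.
def rounds_acc_sdane(delta_a, mu, eps):
    return math.sqrt(delta_a / mu) * math.log(1.0 / eps)

def rounds_smoothness_based(L, mu, eps):
    return math.sqrt(L / mu) * math.log(1.0 / eps)

L, mu, eps = 1e3, 1e-2, 1e-6
# From the heterogeneous regime (delta_A = L) down to very similar clients:
for delta_a in (L, 1e1, 1e-1):
    print(delta_a,
          round(rounds_acc_sdane(delta_a, mu, eps)),
          round(rounds_smoothness_based(L, mu, eps)))
```

When the Hessian dissimilarity $\delta_A$ is much smaller than $L$ (similar clients), the $\delta_A$-dependent round count shrinks accordingly, while the smoothness-based bound does not.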
> W2
Thanks for the suggestion! However, it might be difficult to compare these two methods directly with our methods and others, as different settings are considered: for instance, 1) we consider cross-device setting where $n$ might be very large and each client is stateless (no control variates are stored), and hence SAGA-type methods are not applicable 2) certain first-order dissimilarity condition has to be assumed in this setting 3) as a result, our rates do not have a dependency on $\frac{n}{s}$ but instead on $\zeta^2$.
We will add the rates and discussions of these two excellent methods in our manuscript.
> W3
Thanks for the great question. The complexity is correct. In Table 1, we report the number of communication rounds for all the methods including SVRP (for which the number of rounds is in the same order as the total number of vectors transmitted). We refer to the attached PDF where the settings and how SVRP works are explained in detail. The main metric of SVRP/SVRS is different and we separated these two algorithms in Table 1 and added Remark 6.
> W4
Thanks for the reference. This is a strong work. However, 1) The main target is the same as SVRP/SVRS, i.e. minimizing the total number of bits transmitted across rounds, which is different from this work. 2) It does not consider acceleration (while AccSVRS does) but uses compression to reduce the bits (while AccSVRS does not) and 3) in the end, it requires $\tilde{O}( n + \delta \sqrt{n} / \mu )$ communication rounds scaling with $n$.
> W5
Thanks for the interesting suggestion. The paper is based on the existing theoretical results. Studying if certain previous algorithms can exploit similarity is a bit irrelevant to the focus of this paper, which could instead be an interesting future work.
Also in many cases, by tuning hyperparameters, some methods can outperform their theoretical upper bounds. For instance, Scaffnew and Scaffold have a similar performance by tuning stepsize while Scaffold has not been proven to achieve acceleration. But as you suggested, we will try to see if some of your mentioned methods can benefit from the similarity in our toy example.
> W6
1. Interestingly, we do not use any smoothness-related assumptions to prove the communication complexity.
Definition 4 says the difference function is Lipschitz, and the original functions can be non-smooth.
2. Lipschitz functions are not just linear functions. The log-sum-exp is not linear but Lipschitz.
3. If we further assume that the function $f$ is $L$-smooth, then we agree that Definition 4 can be relaxed. For instance, one can consider $\frac{1}{n}\sum_{i=1}^n ||\nabla f_i (x) - \nabla f(x)||^2 \le \zeta^2 + \beta ||\nabla f(x)||^2 \le \zeta^2 + 2\beta L (f(x) - f^\star)$ with $\beta > 0$, which captures the growth behaviour of the dissimilarity. The only change in the proof will appear in line 589 where we have an additional error term which can be canceled by choosing small $\gamma$ and large $\lambda$ that depend also on $\beta$ and $L$.
4. The interesting regime of considering partial client participation is when the number of clients $n$ is potentially very large. Then Definition 4 allows an almost unbounded distribution of the gradient dissimilarity (similar to the notion of 'stochastic gradient noise' for centralized SGD). Therefore, we believe using this assumption is still reasonable in practice. More detailed discussions can be found in Section 2 from [22] where the same assumption is used.
> Typos and reference
Thanks for finding the typos and we will update this reference.
> Q1
Many thanks. Yes, in Table 1, we reported the interesting regime $\mu \le \Theta(\delta)$ and we will add it to the text.
We thank you for your great review and appreciate your help in improving the paper! If you agree that we managed to address your concerns, please consider raising your score–this would totally boost our chance. If you believe this is not the case, please let us know and we will try our best to answer your every question!
---
Rebuttal Comment 1.1:
Comment: Thank you for replying to the points raised. Assuming you will make the appropriate changes in the paper, I think it can now be accepted, so I am raising my score to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your trust and your support. | Summary: The paper introduces a stabilized version of DANE (S-DANE). It replaces the proximal point step with an extragradient-type step. With the well-designed subproblem criterion, the number of local gradient oracle queries improves over DANE in logarithmic terms. It further combines Monteiro-Svaiter acceleration with S-DANE, which leads to the best communication complexity. The paper also considers partial client participation.
Strengths: a) S-DANE eliminates the logarithmic term present in DANE.
b) Accelerated S-DANE achieves a communication complexity of \(O(\sqrt{\delta_A/\mu \log(1/\epsilon)})\), which surpasses the complexities in the existing literature.
c) The clarity of the manuscript's writing is good.
Weaknesses: See "Questions".
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Apart from its resemblance to the extragradient method when \(\mu = 0\), what is the underlying motivation for considering equation (3)?
2. Regarding line 129 on page 4, the statement "such computation overhead cannot be avoided" seems imprecise. In the strongly convex case presented in [35] and both the convex and strongly convex cases described by Lan et al., 2023, the number of gradient steps required to solve the subproblem does not necessarily increase with \(k\).
Lan, G., & Li, Y. (2023). A Novel Catalyst Scheme for Stochastic Minimax Optimization. arXiv preprint arXiv:2311.02814.
3. Some suggested references for extragradient:
Korpelevich, G. M. (1976). The extragradient method for finding saddle points and other problems. Matecon, 12, 747-756.
Nemirovski, A. (2004). Prox-method with rate of convergence O (1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1), 229-251.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the great evaluation and the support of our paper!
> Q1
Thanks for the interesting question. The geometric meaning of equation (3) can be found in [1]. Previously, we derived this equation directly from the proof. Since we want to have the one-step recurrence of the form:
$$ a\bigl( f(x^{r+1}) - f(x^\star) \bigr) + \frac{1+\mu a}{2} ||v^{r+1} - x^\star||^2 \le \frac{1}{2} ||v^r - x^\star||^2 , $$
then we can first use convexity at the beginning of the proof:
$$ a f(x^\star) + \frac{1}{2} ||v^r - x^\star||^2 \ge a \bigl( f(x^{r+1}) + \langle \nabla f(x^{r+1}), x^\star - x^{r+1}\rangle + \frac{\mu}{2} ||x^{r+1} - x^\star||^2 \bigr) + \frac{1}{2} ||v^r - x^\star||^2 . $$
Then it is natural to set $v^{r+1}$ to be the minimizer of the right expression in $x^\star$, which is $(a\mu + 1)$-strongly convex. After that, we can approximately get the main recurrence.
[1] Svaiter B.F. and Solodov, M.V. A hybrid projection-proximal point algorithm. Journal of Convex Analysis. 1999.
> Q2
Yes, this statement is wrong for accelerated proximal-point methods. Indeed, apart from your references, we also found several adaptive catalyst frameworks [2,3] that successfully remove the logarithmic dependency on $k$. Many thanks for pointing this out and providing the important reference.
[2] Carmon, Yair, et al. Recapp: Crafting a more efficient catalyst for convex optimization. ICML 2022.
[3] Ivanova, Anastasiya, et al. Adaptive Catalyst for Smooth Convex Optimization. International Conference on Optimization and Applications. 2021.
> Q3
We appreciate the reviewer for providing these nice references and we will add them to our manuscript!
We thank the reviewer again for your great review and appreciate your help in improving the paper!
---
Rebuttal Comment 1.1:
Comment: In the experiment involving deep learning, the control variates are omitted. Is this to avoid the need for communication when solving the subproblem? Additionally, when option 2 is applied in Algorithm 3, what is the approximate communication complexity? I noted in a discussion with another reviewer that theoretical analysis might be available for this case.
---
Reply to Comment 1.1.1:
Comment: Thanks for the question.
The main purpose of using option II for some experiments is not to avoid the need for extra communications to obtain the vector $\nabla f_{S^r}(v^r)$ (the clients still need to communicate with the server to get $v^r$ and exchange $x_{i,r+1}$ and $\nabla f_i (x_{i,r+1})$). For different deep learning tasks, option I and option II (using or not using control variates) behave differently. For this particular experiment using ResNet, algorithms without control variates often perform better, but for language tasks, using control variates is often better.
The rate of Algorithm 3 with option 2 and $s=n$ is similar to Algorithm 1 with $s=1$. By picking $\lambda = \frac{\zeta^2}{\epsilon}$, we get a deterministic rate of:
$$
f(\bar{x}^R) - f^\star \le \frac{\mu}{2 [(\frac{\mu}{\lambda}+1)^R - 1]} ||x^0 - x^\star||^2 + \frac{\epsilon}{2} .
$$ | Summary: This paper considers the problem of distributed optimization under second-order similarity under (strong) convexity and smoothness. The paper proposes a new algorithm, Stabilized DANE, which (a) matches the best-known communication complexity under Hessian similarity while (b) requiring that local computation problems are solved only up to an approximately constant accuracy (i.e. not an accuracy that depends polynomially on $1/\epsilon$ where $\epsilon$ is the desired solution accuracy). The authors also consider partial participation, where only a subset of the clients is available at any given time, and also provide an accelerated version of Stabilized DANE.
Strengths: - The paper is written clearly and the new algorithms are well-motivated. The proofs are easy to follow.
- The stabilization technique is elegant and clearly obtains both theoretical improvements (mostly in local computation complexity, as far as I can tell) and in practice (as shown in Section 5).
- The algorithms developed build on DANE, which is well-known and already a strong algorithm.
Weaknesses: 1. "It is necessary to assume a certain level of dissimilarity" (lines 208-212), I'm not actually sure that's _necessary_. There are certainly upper bounds without this assumption. Why is it necessary? Relaxed assumptions like expected smoothness are often enough even in this setting.
2. The complexities given for Acc-SVRS/Catalyzed SVRP in Table 1 are not consistent with how the other rates are presented in this work: the $n$ factors should not be there, since in Table 1 full participation is assumed for every other work anyway (i.e. they should all have an $n$ factor multiplied). Which brings me to another point: what is the advantage of S-DANE over SVRS/SVRP? Is it only the higher efficiency of local computation?
3. The paper right now has a few different settings (full participation, partial participation) that contrast to other settings in prior work. Remark 6 attempts to explain the difference but I think this should be explained more clearly and placed in the main body of the paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Please address my concerns in the weaknesses section.
- Is the hessian similarity actually smaller than the smoothness over the training trajectory in the CIFAR10 optimization task?
- Can you derive theory for the variant of your method with no control variates that you use in the deep learning experiment? Does it just reduce to stabilized prox then?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the great evaluation of our paper! Your every comment is important to us. We did our best to
understand and reply to your constructive feedback as follows:
> W1
Yes, this sentence is confusing! We will rewrite it.
- This sentence (that was first written in [22]) particularly refers to the setting where $n$ might go to infinity and each client does not store any vectors on its device (what people call 'stateless'). Then this sentence says to prove convergence, we need to assume a certain level of dissimilarity, which is correct. This sentence does not necessarily refer to using Definition 4.
- When $n$ is sufficiently small, then we can use SAGA-type methods (considered in Scaffold) that ask each device to store certain control variates. Then yes, no dissimilarity assumption is required. Meanwhile, we need to assume the $L$-smoothness of the function and the final complexity depends on $L$.
- In this paper, we consider the same setting as in paper [22] (with potentially very large $n$). Then as you mentioned, the question is if using Definition 4 is too strong. Interestingly, we do not use any smoothness-related assumptions to prove the communication complexity. Therefore, the function classes also include Lipschitz and non-smooth functions. In this case, we believe using Definition 4 is necessary.
- However, as you mentioned, if we further assume that the function $f$ is $L$-smooth, then Definition 4 can exactly be relaxed. For instance, one can consider $\frac{1}{n}\sum_{i=1}^n ||\nabla f_i (x) - \nabla f(x)||^2 \le \zeta^2 + \beta ||\nabla f(x)||^2 \le \zeta^2 + 2\beta L (f(x) - f^\star)$ with $\beta > 0$, which captures the growth behaviour of the dissimilarity. The only change in the proof will appear in line 589 where we have an additional error term which can be canceled by choosing small $\gamma$ and large $\lambda$ that depend also on $\beta$ and $L$.
We will make this sentence clear and add more discussions to the manuscript according to your suggestion!
> W2 \& W3
Thanks for raising these points! We refer to the attached PDF for clarification of your concerns. (In Table 1, we report the number of communication rounds for all the methods including SVRP for which the number of rounds is in the same order as the total number of vectors transmitted.) Please let us know if there is anything unclear. We will move these discussions to the main paper as you suggested!
> Q2
This is a very interesting question. From our experiments, we saw that the estimated dissimilarity quantity defined as in Figure 3 ranges from $10^{-4}$ to $10^{-3}$ in the CIFAR10 task (and we choose $\lambda = 10^{-3}$ and the local learning rate $10^{-1}$ for S-DANE). Unfortunately, we did not record the local estimated smoothness quantity for our experiments. Empirically studying the relation between Hessian similarity and smoothness in different deep-learning tasks with various NN structures is indeed a very interesting future direction.
> Q3
Yes! The proof is similar to the one for S-DANE with $1$ client participation. Let us compare Algorithm 3 with $s= n$ and Algorithm 1 with $s = 1$. The main difference is that the former considers deterministic averaging and the latter considers sampling (in the proof, we need to take expectation for the latter which is similar to the deterministic averaging for the former). Therefore, if we study Algorithm 3, we have to similarly assume a certain first-order dissimilarity condition. We are not fully sure what you mean by stabilized prox. But yes, it can be seen as stabilized FedProx.
We thank you again for your great review and we appreciate your support! If you agree that we managed to address all issues, please consider slightly raising your score–this would totally boost our chance. If you believe this is not the case, please let us know and we will try our best to answer your every question!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response.
1. "In Table 1, we report the number of communication rounds for all the methods including SVRP for which the number of rounds is in the same order as the total number of vectors transmitted" I have read your note, and I am not sure I agree with the statement that the number of transmitted vectors / round is the most important variable, this very much depends on the network type. In any case, one way of still communicating this information is to indicate in an additional column, explicitly, how many model-size vectors are communicated per communication round. As far as I understand, for your method, this would be "n" or "s" while for SVRP/SVRS it would be 2, on average (n comm per epoch + n comms at the end of each epoch). I think the table, as it exists currently, does not provide the full picture to the reader about the trade-offs between those different algorithms.
2. If the paper is accepted, can you redo the experiments and add the plots showing the local dissimilarity over the training trajectory as compared to the smoothness?
3. I use the term "stabilized prox" to refer to the method given by eqn (3) in your work, with $F_k$ being chosen stochastically.
---
Reply to Comment 1.1.1:
Comment: > 1.
Thanks for the great suggestion! We will add one more column to specify the number of vectors communicated per round to Table 1. For our method, it is $n$. For SVRP, it would be 1 with probability $1 - \frac{1}{n}$ and $n$ with probability $\frac{1}{n}$. Yes, this would provide a better picture for the reader about the trade-offs between these algorithms.
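A quick sketch (editor's illustration, not from the discussion) confirming the expected per-round communication count implied by the distribution stated above:

```python
# Expected number of model-size vectors SVRP communicates per round, given the
# distribution stated above: 1 vector with probability 1 - 1/n, n vectors with
# probability 1/n. The expectation is 2 - 1/n, approaching 2 for large n.
def expected_vectors_svrp(n):
    return 1 * (1 - 1 / n) + n * (1 / n)

print(round(expected_vectors_svrp(100), 2))  # 1.99
```

This matches the reviewer's earlier remark that SVRP/SVRS communicate about 2 vectors per round on average, versus $n$ (or $s$) for S-DANE.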
>2.
Yes, we will add this comparison.
>3.
Thanks for the clarification. The stochastic version of eqn (3) is the same as S-DANE with $s=1$, which is also equivalent to Algorithm 3 with $s=1$.
Thanks for your quick response and your constant help in improving the paper. | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive evaluations of our manuscript and we appreciate all the help from the reviewers for improving the paper.
In this work, we aim to develop federated optimization algorithms that 1) minimize the number of required communication rounds to reach the desired accuracy and 2) achieve high efficiency in local computation. These two represent central metrics in federated contexts, given the high cost of establishing connections between the server and the clients.
Specifically, we:
- developed novel algorithms: S-DANE (basic version) and Acc-S-DANE (accelerated version), which achieve the best-known communication complexity (in terms of the number of rounds) and local computation efficiency among all existing basic and accelerated methods (in the full client participation setting). This is achieved by using a more stabilized prox-center in the proximal step.
- further provided auxiliary results about partial client participation and using arbitrary stochastic local solvers, making them attractive in practice.
- provided a simple analysis for both algorithms. We derive convergence estimates that are continuous in the strong convexity parameter $\mu$.
Three reviewers asked about the setting and the SVRP algorithm. We refer to the attached PDF for clarifications.
After submission, we realize the following improvements can be made:
- The proof can be much simplified. We can completely remove the sequence $\\{ v_{i,r+1} \\}$ and only keep $\\{ v^{r+1} \\}$.
- We found an important reference [1] which we will include in the related work section. Our proposed method (S-DANE) recovers [1] for the special case of $n=1$. The paper [1] also contains a geometric explanation of why the stabilized proximal-point method has better local efficiency which could be of interest to the readers of our paper.
- Both of our algorithms can be made fully adaptive by using line-search (in the full-client participation setting). The details can be found in the attached PDF. This is, as far as we know, the first result of adaptive hyper-parameter tuning in the setting of exploiting second-order dissimilarity.
We anticipate an interactive discussion with you, and we will be most happy to answer any remaining questions.
[1] Svaiter B.F. and Solodov, M.V. A hybrid projection-proximal point algorithm. Journal of Convex Analysis, 6 (1):59–70, 1999.
Pdf: /pdf/d64e7aff54cb3bf1c92f461288e95d69459dcc58.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces a new variant of the existing DANE algorithm for federated learning. The paper integrates the stabilized proximal point method into DANE to form S-DANE. Convergence analyses are provided showing that S-DANE has the same rate as DANE but with better dependency on the communication round. The accelerated variant of S-DANE is also proposed which achieves better rates than S-DANE. Numerical experiments are able to show the advantage of S-DANE and Accelerated S-DANE compared to existing works.
Strengths: - The related work discussion of the paper is good. I am able to see how the paper is positioned among existing works.
- I have not seen stabilized proximal update applied to federated learning (FL) so the idea in the paper appears to be new for FL.
- S-DANE and accelerated S-DANE are able to achieve the best-known rate for their corresponding setting.
- The claims are supported by theoretical analyses.
- The convergence results are done under both full and partial client participation.
- Extensive experiments showing the advantage of S-DANE/Accelerated S-DANE.
Weaknesses: - The design of the algorithm is a little bit unclear, see questions below.
- Somehow I only see the full set of 8 algorithms in the left-most plot in Figure 2, while the remaining 3 plots display only about 4 algorithms, i.e. I cannot see Scaffnew, AccGradSliding, DANE-GD, GD.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In both Algorithm 1 and Algorithm 2, as each device $i \in S_r$ performs the update in parallel, I do not see how $\nabla f_{S_{r}} (v^r)$ or $\nabla f_{S_{r}} (y^r)$ is computed, since it requires the evaluation of the gradient of $f_j$ for $j \in S_r, j\neq i$?
2. In Figure 4, Scaffnew actually performs worse than Scaffold, this is somewhat surprising given the experiments in Mishchenko et al. (2022). What is your explanation on this?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The paper adequately discusses the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough evaluation of our paper. Every one of your comments is important to us. We did our best to understand and reply to your constructive feedback as follows:
> W1 \& Q1
Thanks for the question. The description can be found from line 158 to 163. Specifically, during each communication round $r$, the server first sends $v^r$ to the sampled clients. Then each client $i \in S_r$ computes $\nabla f_i(v^r)$ and sends this vector back to the server. Afterwards, the server computes $\nabla f_{S_r}(v^r) = \frac{1}{s}\sum_{i \in S_r} \nabla f_i(v^r)$ and sends $\nabla f_{S_r} (v^r)$ back to these clients. The same procedure was written in Section 4 in the Scaffold paper [1].
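The server-client gradient-sharing procedure described in this reply can be sketched in a few lines. This is an illustrative toy, not the authors' code: the `clients` list of callables is a hypothetical stand-in for local gradient oracles.

```python
import numpy as np

def communication_round(server_point, clients, sampled_ids):
    # 1. Server broadcasts v^r; each sampled client i returns grad f_i(v^r).
    local_grads = [clients[i](server_point) for i in sampled_ids]
    # 2. Server averages: grad f_{S_r}(v^r) = (1/s) * sum over i in S_r of grad f_i(v^r).
    avg_grad = np.mean(local_grads, axis=0)
    # 3. Server sends the averaged gradient back to the sampled clients.
    return avg_grad

# Toy usage with quadratic local losses f_i(v) = 0.5 * ||v - c_i||^2,
# whose gradient at v is simply v - c_i.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
clients = [lambda v, c=c: v - c for c in centers]
g = communication_round(np.zeros(2), clients, [0, 1])
```

The extra round trip in steps 1-3 is what lets every sampled client use the same averaged gradient, mirroring the procedure in Section 4 of the Scaffold paper.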
> W2
Thanks for the great observation. The remaining 3 plots are about partial-client participation. Indeed, we do not plot these four algorithms as partial-client participation is not considered in their original papers. We will make it clear in the manuscript.
> Q2
Thanks for the great observation. Note that Scaffold has two choices of control variates (cv). The cvs between Scaffnew and Scaffold are different in our experiments. For Scaffold, we use the same cv as DANE (option I in [1]) while for Scaffnew the cv is similar to option II in [1].
1. In the Scaffnew paper, they use the same cv for both Scaffnew and Scaffold. This is why they perform similarly (with tuned parameters).
2. So far, no results have shown that the cv for Scaffnew can achieve communication reduction under second-order similarity. However, the other cv (option I) can exploit similarity [2]. This is perhaps why we can observe a better performance of Scaffold in certain experiments.
3. Figure 4 is about deep learning. We expect that the cv for Scaffnew (option II) can be slightly less stable than the other cv (option I).
[1] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. ICML 2020.
[2] Xiaowen Jiang, Anton Rodomanov, and Sebastian U Stich. Federated optimization with doubly regularized drift correction. ICML 2024.
We thank you again for your great review! If you agree that we managed to address all issues, please consider raising your score, as this would greatly improve our chances. If you believe this is not the case, please let us know and we will try our best to answer every one of your questions!
---
Rebuttal Comment 1.1:
Title: Responses to authors
Comment: I thank the authors for the responses. I believe my concerns have been fully addressed. I have adjusted the score. If the paper gets accepted, I would like these responses to appear in the revised version as well.
---
Reply to Comment 1.1.1:
Comment: Yes! Many thanks for the support. | null | null | null | null | null | null |
Strategic Multi-Armed Bandit Problems Under Debt-Free Reporting | Accept (poster) | Summary: This paper considers a strategic variant of the multi-armed bandit problem with payments. It thereby builds upon a problem studied by Braverman et al. The paper formally introduces the problem formulation and proposes an algorithm, S-SE, that combines successive elimination with a meticulously chosen payment rule. It is shown that truthfulness is a dominant SPE under which the proposed algorithm suffers logarithmic regret. The paper also analyzes the utility of the arms under S-SE and the dominant SPE. Finally, the performance of the algorithm under arbitrary arm strategies is analyzed.
Strengths: - The studied problem, which is at the intersection of bandit learning and mechanism design, is very interesting.
- The paper is well-written and easy to follow.
- It is interesting to see that the regret achieved by Braverman et al. can be improved using an extension of successive elimination + payment rule (under debt-free reporting).
- The paper proves that truthfulness is a *dominant* SPE.
- The provided regret bound appears to be near-optimal and the dependence on the gaps $\Delta_{1k}$ and $\Delta_{2k}$ is quite interesting.
- Overall, even though many aspects of this problem have already been studied by Braverman et al., I believe that this papers makes some insightful and novel contributions.
Weaknesses: - The paragraph **Tightness of Regret** is very hand-wavy. In strategic settings like the one you study here, one easily mistakes an intuition about "rational agent behavior" for a rigorous argument. In particular, you write
"The optimal arm only needs to report a slightly higher value than the second-best arm to be chosen by the player as the superior option [*you here assume a class of mechanisms that do in fact choose the arm with highest reported value in some specific way*]. Consequently, **all efficient low-regret mechanisms** must ensure truthfulness, at least for the two best arms."
Intuitively, yes, I agree. However, this is a claim that, I think, is very difficult to prove rigorously. I recommend adapting the language used in this paragraph and emphasizing that you're providing intuition only.
- Similar hand-wavy language and statements are used a few times in the text (not as bad as the above one). Please make sure to be clear whether you provide intuition (which is great) or you actually proved the statement (even better).
- A minor thing that is also about the use of slightly imprecise language and might come across as nitpicking. Lemma 14 in Braverman et al. actually shows that you cannot achieve $(\alpha \mu_1 + (1-\alpha) \mu_2)T$ utility for *constant* $\alpha > 0$. Hence, you can achieve more than $\mu_2 T$ revenue, e.g., let $\alpha = 1/\sqrt{T}$, despite your statement in line 126 that "no mechanism can guarantee more than $\mu_2 T$ revenue". This is not a serious issue, since I think we can all agree that defining regret w.r.t. $\mu_2 T$ is still the right choice in view of Lemma 14. I still wanted to make you aware of this.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Line 269: You say that the best arm is incentivized with a bonus of $O(\Delta_{12} T)$. Is this a typo? If the bonus payment would be of order $T$, then your regret bound would be of order $T$. It also doesn't seem necessary to make such a large bonus payment, since you'd only need to compensate the best arm for its truthful reporting in the first $\tau$ rounds.
**Minor things:**
- Consider splitting the text in Section 2 into several paragraphs to improve readability. Also, clearly highlighting the observational model and assumptions using \paragraph or some kind of \emph would be helpful, in my opinion.
- Typos: "tof", "understrategy"
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are adequately addressed in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer VQPm,
Thank you very much for your detailed and insightful review. We greatly appreciate the time and effort you dedicated to it.
We are computing regret with reference to $\mu_{2}$, which involves incentivizing the best arm with $O(T \Delta_{12})$. This approach is not harmful, as it offsets the difference between $\mu_{1}$ and $\mu_{2}$ while preserving $\mu_{2}$ for the player. Essentially, we allocate all values greater than $\mu_{2}$ to the best arm throughout the game. Additionally, the incentive should address the second phase. If not properly incentivized during this phase, the best arm might withhold a portion of the reward (especially since the second-best mean is not communicated), leading to reported values that are less than $\mu_{2}$ on average. This could result in linear regret, given that the second phase lasts much longer than the first. Thus, incentivizing the best arm with $O(T \Delta_{12})$ appears necessary (as also done by Braverman et al.), which is acceptable given the definition of regret. However, improvements have been achieved through the new bonus structure assigned to suboptimal arms, which has been optimized due to the adaptiveness of the new algorithm.
We appreciate your feedback on the weaknesses in our representations, language, and typos. We will address these issues and make the necessary corrections in the final version.
We hope that we have addressed your concerns and answered your questions clearly, contributing positively to your evaluation.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifying. When I wrote the questions I forgot that your benchmark is the second best arm.
I decide to keep my original score with the primary reason being that, even though the paper makes some novel contributions, these are to some degree incremental. I'm still in favor of acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your valuable feedback and the insightful discussion. | Summary: Multi-armed bandit problems capture explore-exploit scenarios under different reward structures including stochastic and adversarial. In this paper, the authors consider bandit problems where the arms report rewards strategically. To tackle such a problem, the authors devise a successive elimination scheme, where arms are eliminated till the best arm is identified (with high probability). To encourage truthful reporting, the authors devise a bonus scheme, so that the best arm is pulled unless it dips below the second-best arm, which encourages truthful reporting. In scenarios where arms fail to report truthfully, the authors develop regret bounds detailing the performance of algorithms.
Strengths: 1. **Problem Formulation is Interesting** - The authors investigate debt-free reporting under the scenario of a bonus structure. The problem structure/formulation is interesting as it covers incentives, in the form of arms reporting rewards strategically, and the explore-exploit dilemma found across bandit works.
2. **Dominant Strategy is Detailed and Analyzed** - In Section 4, the authors characterize the dominant strategy for arms, under a certain type of bonus payoff, and show that this results in truthful reporting from arms. Such a characterization helps us better understand how no-regret algorithms can be derived and provides an important piece of information on these types of bandits.
3. **Regret Bounds in both Strategic and Non-Strategic Scenarios** - The authors derive regret bounds in both strategic and non-strategic scenarios. For non-strategic scenarios, the truthfulness of the arms makes it easier to determine optimal arm pulls, while for non-truthful scenarios, they use the upper bound on savings, M, to characterize regret. It would be nice to understand how the regret bound proved in Theorem 5.1 scales as a function of T.
4. **Bonus Structure as Payment Improves Regret Bounds** - Table 1 lists a comparison between the method proposed in their work (S-SE) and previous work (S-ETC). The two models differ in their bonuses, and they show that by modifying this, they go from a T^2/3 bound to a T^1/2 bound.
Weaknesses: 1. **Lack of Empirical Verification** - On page 8, the authors discuss the tightness of the regret bounds, and argue that while the actual regret might change, the order of the regret is roughly the same across scenarios. However, it would be interesting to understand how tight such regret bounds are across different scenarios. Under what types of scenarios are the regret bounds tight and under which are regret bounds loose? An empirical analysis, with even simulated data, would provide a nice complement to the theoretical analysis proposed in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you further detail the types of scenarios or real-world regimes where such bandits exist?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The future work section discusses additional questions that could be answered, though authors should more explicitly label this as a limitations section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 3dvn,
Thank you for your insightful feedback and for the time you dedicated to reviewing our paper.
In the current version of our paper, we did not include experimental results, which is indeed quite common in game theory, due to the fact that the dynamics of the game are intrinsically dependent on arm strategies that are generally intractable in real-world scenarios. Consequently, we provided a theoretical analysis of the dominant Subgame Perfect Equilibrium (SPE) and discussed the tightness of its regret. However, inspired by your review, we have conducted a simulated experiment where we fix the arm strategies from the beginning and track the cumulative regret. Specifically, we consider six strategic arms with $\mu_{1} > \mu_{2} > \mu_{3} > \mu_{4} > \mu_{5} > \mu_{6}$. The experiment is run over a horizon of 10,000 and averaged over 100 epochs. We examine three scenarios:
1. **Untruthful Arbitrary Reporting**: At each round, the selected arm randomly reports 100%, 60%, 40%, 10%, or 0% of its observed reward.
2. **Truthful Reporting**: This corresponds to the Dominant SPE.
3. **"Optimal" Reporting**: Here we use the term "optimal" somewhat loosely. In this scenario, only the two best arms report truthfully, while the remaining suboptimal arms report 0.
The results are detailed in the attached PDF. As expected, the worst regret corresponds to the first scenario. However, for the two remaining scenarios, both exhibit logarithmic cumulative regret, with the "optimal" reporting scenario demonstrating a better factor since the algorithm eliminates suboptimal arms more quickly and transitions faster to the second phase where it achieves better performance. While "optimal" reporting is challenging to achieve, **S-SE** effectively incentivizes arms to report truthfully, creating a dominant SPE and guaranteeing the player logarithmic regret.
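The three reporting scenarios above can be mimicked with a toy simulation. This is a hypothetical sketch, not the authors' experiment: it uses a plain UCB-style player rather than the actual S-SE mechanism, and the arm means, horizon, and noise level are made up. Regret is measured against the second-best mean, as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(report_fracs, means, T=2000):
    """Player runs UCB on *reported* rewards; arm k reports a fixed
    fraction report_fracs[k] of each observed reward and keeps the rest."""
    K = len(means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    revenue = 0.0
    for t in range(T):
        if t < K:
            k = t  # play each arm once to initialize
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            k = int(np.argmax(ucb))
        reward = rng.normal(means[k], 0.1)
        reported = report_fracs[k] * reward
        counts[k] += 1
        sums[k] += reported
        revenue += reported
    # regret relative to the second-best mean, per the paper's benchmark
    return sorted(means)[-2] * T - revenue

means = [0.9, 0.8, 0.6, 0.5, 0.4, 0.3]
truthful = simulate([1.0] * 6, means)      # dominant-SPE-style reporting
untruthful = simulate([0.1] * 6, means)    # arms withhold most of the reward
```

As one would expect from the discussion above, withholding reward inflates the player's regret relative to truthful reporting.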
Our model is inspired by several real-world applications and can be seen as a multi-agent extension of the principal-agent problem in contract theory. It deals with dynamic agency issues where a principal must select one of K agents to carry out tasks on their behalf, with the cost of these tasks remaining unknown to the principal (e.g., choosing among K contractors for a job). A crucial aspect is that the principal does not know in advance the exact cost or benefit of the actions performed. The concept of debt-free reporting of strategic arms is inspired by e-commerce transactions, where a platform may choose to cancel a sale but cannot create one. This generalizes to binary bandits, where it is easy to hide a success and report a fake failure but difficult to create a success. Additionally, it is motivated by repeated trades with budget constraints, where an arm cannot report more than it possesses.
We also agree with your suggestion regarding the relabeling of the limitations section and will make the necessary corrections.
We hope that we have addressed your concerns and answered your questions satisfactorily, contributing positively to your evaluation.
---
Rebuttal Comment 1.1:
Title: Thank you for your new experiments
Comment: Thank you for your clarifications and the new experiments. I am satisfied with these experiments and am happy to raise my score to an 8.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your valuable feedback and the insightful discussion. | Summary: The paper addresses a strategic bandit setting. Specifically, the player (algorithm) can select between K arms as in standard bandits but each arm can choose the reward it reports instead. Because of the debt-free assumptions, the reported value cannot exceed the realized reward. The papers gives an algorithm that rewards the arms based on the values they report. The regret is defined with respect to the second highest mean + the total payment given to the arms. The given algorithm can achieve logarithmic regret which is an improvement over the regret of [8].
Strengths: -The paper essentially resolves an open problem of [8] where the arms are only debt-free.
-The paper also includes an additional bound which nicely characterizes the regret under deviations.
Weaknesses: -The motivation of the paper does not feel strong. Specifically, the paper answers a small question from reference [8].
-While [8] rewards using rounds, here the reward is through payment which gives higher flexibility. Can the algorithm be generalized to allow payments through rounds instead (assuming a fixed horizon T)?
-If the player knows the minimum gap $\Delta$ can an explore then commit based algorithm similar to [8] be used to achieve log T regret?
Below are minor presentation issues:
-typo: t → $t$ in line 17
-Bullet 2 in the contribution: the regret having $\Delta_{2k}$ suggests that we should instead have $\mu_2 > \mu_3 $, i.e. strict inequality.
-There is a LaTeX problem in the Algorithm 1 block: it refers to the model on page 2 instead.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please adress the above points under weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer JBtJ,
We appreciate your review and the time you've dedicated to it.
In this paper, we concentrate on strategic arms under debt-free reporting, driven by various real-world applications such as interactions on e-commerce platforms and repeated trades with budget constraints [9, 10], where reported values cannot surpass observed ones. Under the unconstrained payment setting, where arms can report any value, [8] presented an optimal mechanism with constant regret. However, when this assumption is relaxed to restricted payments (i.e., debt-free reporting), [8] proposes a mechanism that suffers from a regret of $T^{2/3}$. Hence, moving from unrestricted payment to debt-free reporting increases the regret from a constant value to $T^{2/3}$. This observation motivated us to focus on this setting.
While [8] employs additional rounds at the end of the game as incentives, we propose using payments at the end. Both types of bonuses are allocated at the end of the game, and bonuses are subtracted from player revenue and considered in the regret calculation, regardless of their nature. By using payments as incentives, defining truthful reporting as a dominant subgame perfect equilibrium (SPE) is straightforward. However, with additional rounds, we refer to a pseudo-truthful strategy since truthfulness applies only to rounds excluding the additional bonus rounds, specifically during the initial phase when the algorithm is learning the statistics of each arm. During the additional bonus rounds, which occur at the end of the game, the player ignores the values reported by the arms, as clearly stated in [8]. Given that actual payment is a common practice in typical mechanism designs for procurement auctions, we chose actual payments to maintain a more direct intuition and facilitate the presentation. In fact, a modified version of our algorithm that includes playing each arm $\frac{\Psi_k}{\mu_k}$ at the end effectively regains the payment through rounds. As already mentioned in Section 6, the improvement in regret does not depend on the type of incentives, whether they are additional bonus rounds or bonus payments. This is because bonuses are subtracted from player revenue and considered in the regret calculation, irrespective of their nature. Therefore, replacing the additional rounds in [8] with payments will not lead to an improvement in regret. The improvement is mainly due to the adaptivity of the algorithm, which allows for more tailored and less harmful bonuses. However, this adaptivity also necessitates a more technical solution concept due to the extensive nature of the game, as opposed to the simultaneous one used in [8].
If the player knows the minimum gap $\Delta$, an explore-then-commit-based algorithm similar to [8] can achieve $\log T$ regret. In fact, it suffices to explore each arm for $\log(T)/\Delta^2$ rounds instead of $T^{2/3}$. However, the purpose of bandit algorithms is to overcome the lack of this knowledge and effectively address cases where $\Delta$ is unknown.
We thank you for pointing out the presentation issues, and we assure you that we will adjust the final version of our work accordingly.
We hope this addresses your concerns and contributes positively to your evaluation. | Summary: This paper studies a multi-armed bandit setting with stochastic arms but where the arms are strategic agents -- when an arm is pulled it can choose what fraction of the reward to keep for itself and what fraction of the reward to pass on to the principal (the learning algorithm picking the arms). The principal can only see the reward they receive (not the amount received by the agent), and all parties want to maximize their total reward.
This setting was originally studied by Braverman et al., who showed that if the principal runs a standard no-regret learning algorithm then there are subgame perfect “colluding” equilibria where the principal receives nothing -- on the other hand, the principal can always asymptotically achieve the second highest average payoff by essentially running a second-price auction at the beginning of the game and only pulling the winner.
This strategy proposed by Braverman et al. has two issues. One is that (naively implemented) it may require agents (the arms) to report a higher value in some early rounds than the value they actually achieve. Even if you fix this issue (by converting this idea into an explore-then-commit style algorithm), the regret of the principal -- the gap between their expected utility and actually receiving the second-highest average per round -- will be a suboptimal O(T^{2/3}).
This paper shows how to get O(sqrt(T log T)) instance-independent and O((log T)/Delta) instance-dependent regret bounds for this problem, on par with regret bounds achievable for standard bandit problems. The main idea is to adapt the Successive Elimination bandit algorithm (and its analysis) to this strategic setting. At a high level, the algorithm runs SE until there is only one arm left, and then requires this arm to contribute (on average) at least the second-highest average reward per round. Finally, all participating agents are given a bonus designed to make bidding their true value incentive compatible.
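For readers unfamiliar with the subroutine, here is a minimal sketch of (non-strategic) Successive Elimination, the building block the mechanism adapts. The payments and bonuses, which are the paper's actual contribution, are omitted; the noise level and confidence radius are illustrative choices.

```python
import numpy as np

def successive_elimination(means, T=20000, seed=0):
    """Play all active arms in rounds; drop an arm once its upper
    confidence bound falls below the best arm's lower confidence bound."""
    rng = np.random.default_rng(seed)
    K = len(means)
    active = list(range(K))
    counts = np.zeros(K)
    sums = np.zeros(K)
    t = 0
    while len(active) > 1 and t < T:
        for k in active:  # play each surviving arm once per round
            sums[k] += rng.normal(means[k], 0.1)
            counts[k] += 1
            t += 1
        emp = sums[active] / counts[active]
        conf = np.sqrt(2 * np.log(T) / counts[active])
        best_lcb = np.max(emp - conf)
        # keep only arms whose UCB still exceeds the best arm's LCB
        active = [k for k, m, c in zip(active, emp, conf) if m + c >= best_lcb]
    return active

survivor = successive_elimination([0.9, 0.5, 0.3])
```

The strategic version must additionally price this elimination phase so that arms have no incentive to misreport while it runs.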
Strengths: Understanding how to adapt standard learning algorithms to strategic settings is an important question. This paper provides an improved solution to a natural strategic learning environment and presents (what I would guess to be) tight regret bounds for learning in this environment. Although the algorithm is an adaptation of a standard no-regret algorithm (successive elimination), analyzing its performance in this strategic environment is non-trivial (as it involves pinning down the worst possible equilibrium behavior of the individual arms). The paper was well-written and easy to read.
Weaknesses: I think it is a little unsatisfactory that the optimal learning algorithm still takes the form “auction off the business to the best-arm for the second-best-arm’s price”, even if this auction is now done in a slightly smoother way (via successive arm elimination over several rounds, instead of a sublinear exploration period where arms simply reveal their price). I.e., both this algorithm and the previous algorithm have the property that after some stage of the game, they will only ever select one specific arm. I think it would be a more interesting result if the principal’s strategy was more time-stable and could e.g. handle arms entering or leaving the market (or adversarial rewards, although that would require more changes to the problem set-up).
Another aspect of the solution that bothers me a little bit is that the new scheme requires bonuses that grow linearly in T. This seems to require a lot of trust on behalf of the agents that they will in fact eventually get reimbursed. In contrast, the previous algorithm of Braverman et al. also implements some bonus scheme (through rewarding the agents in some extra rounds at the end of the game), but these bonuses are sublinear in the time horizon.
Technical Quality: 4
Clarity: 3
Questions for Authors: Is it possible to modify the S-SE algorithm to work with sublinear bonuses?
Do the regret lower bounds from standard bandits carry over? (I.e., are the regret bounds in this paper tight?).
Feel free to reply to any other aspect of the review.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer byTp,
We express our sincere gratitude for your invaluable review and the time you have dedicated to it.
This paper, motivated by various real-world applications such as e-commerce and repeated trades with balanced budgets, delves into strategic arms under debt-free reporting. Our primary objective is to address an open problem in [8], which involves improving the regret under debt-free reporting, aiming to progress from $T^{2/3}$ to $\sqrt{T \log(T)}$. Therefore, we adopt the same setting as in [8], including a fixed number of strategic arms. On one hand, this setting inherently includes the scenario of arms exiting the market, equivalent to arms deciding to report 0 for all subsequent steps. On the other hand, accommodating arms entering the market would necessitate significant changes to the setting (compared to [8]) and would require introducing concepts of adversarial rewarding or even non-stationary systems. While we agree that this is a fascinating proposal, we demonstrate that even within the same setting as [8], there are substantial improvements (such as a more suitable incentive-compatible algorithm, better/tighter regret bounds, robustness to small deviations, etc.), which form the core of this paper. We assure you that the notion of a time-variant market will be included in the future work section, as we recognize that it will motivate further significant research.
In the strategic setting, under the most unfavorable circumstances, it is impossible to guarantee gains surpassing $\mu_2$. This limitation arises because the optimal arm only needs to report a marginally higher value than the second-best arm to be chosen as the superior option by the player, as formally presented in Lemma 14 of [8]. Therefore, providing a bonus of $O(T \Delta_{12})$ to the best arm to ensure truthful reporting remains optimal for the player, as it guarantees a return of $\mu_{2}$ each time the best arm is played. Generally speaking, the harmful bonus in the strategic case is the one subtracted from $\mu_{2}$, i.e., the ones paid to the suboptimal arms, which is sublinear in our case (on the order of $\log(T) / \Delta$). As a side note, even [8] rewards the best arms with $O(T \Delta_{12})$ (refer to Mechanism 2, Line 3), which is entirely acceptable since it does not contribute to the regret.
In the paragraph "Tightness of Regret" (line 271 of our paper), we discussed that the upper bound on the regret is tight and that the lower bound is of the same order as the result given in (12).
Additionally, we want to highlight the contributions of our paper. There is a notable technicality worth mentioning regarding the definition and proof of our solution concept. Our setting involves an extensive game, as opposed to a simultaneous one (as in Braverman et al., 2019, where learning occurs once at the beginning). In our extensive game, the equilibrium strategy of each arm at each node must consider the updated history of the entire game up to that point. This task is generally intractable (Ben-Porath, 1990; Conitzer and Sandholm, 2002). Interestingly, we demonstrate that our mechanism ensures truthfulness as the best response to adversarial strategies independent of history. Our approach entails consistent truthful reporting, regardless of the equilibrium path. Notably, this remains an equilibrium irrespective of past events, resembling sub-game perfect equilibrium. While the adaptivity of our algorithm introduces a more complicated solution concept for the analysis, it is crucial for the improvement in regret that we present. This adaptivity necessitates well-tailored and less harmful incentivizing bonuses, which are key to better regret. The paper also includes an additional bound which nicely characterizes the regret under deviations giving an idea about the robustness of the algorithm.
We appreciate your feedback and assure you that this discussion will be incorporated into the final version of our work. We hope this addresses your concerns and contributes positively to your evaluation, leading to a better score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. After reading the response and other reviews, I have decided to increase my score slightly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your valuable feedback and the insightful discussion. | Rebuttal 1:
Rebuttal: Here we present the complementary experimental results to our theoretical analysis, as suggested by Reviewer 3dvn.
Pdf: /pdf/973db4ab320d286e19573c8cd4e83c3bfae0a674.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Prevalence of Neural Collapse in Neural Multivariate Regression | Accept (poster) | Summary: The paper introduces a novel extension of neural collapse to neural regression collapse, demonstrating that a similar phenomenon exists in multivariate regression. It treats the last layer feature vectors as free variables when minimizing the loss function and derives results similar to those of traditional neural collapse.
Strengths: The paper is well-written and has a good structure.
The authors provide a detailed explanation of NRC following the framework of NC. They validate NRC through experiments on multiple datasets and discuss the role of regularizers.
The paper also includes comprehensive proofs, demonstrating NRC's performance under the UFM setting.
Weaknesses: - It seems that Neural Multivariate Regression simply turns the final layer's classification into a regression loss? And the two are just different in form, but ultimately both can be written as an MSE loss, making little difference in analyzing the NC phenomenon.
- Since NC is a model-agnostic framework, after changing the task, how are the conclusions of NRC different from those of traditional NC? I think this should be further clearly explained.
Technical Quality: 3
Clarity: 3
Questions for Authors: - If the final layer regression directly uses the features $H$ learned by the preceding neural network through weights $W$, and $H$ already has good properties, can it be understood that the regression model has no actual significance?
- Does the regression here only refer to linear regression? Can nonlinear forms, such as GP regression, also be considered?
- In lines 222-225, why are the norms finite in the UFM regression case?
- In Figure 5, it seems that as the value of the regularizers decreases, the effectiveness of NRC decreases while the performance of MSE improves. What is the reason for this? Additionally, could the experimental results from Section 4.4 (removing regularization), be added to the figure for comparison?
- In the experiments validating NRC, the authors used six datasets. However, when testing the effect of the regularizer, did you only select one of these datasets? How does it perform on the remaining datasets?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the detailed review and helpful comments as well as for recognizing that our work “derives results similar to those of traditional neural collapse”.
We acknowledge that the analysis for NRC bears some similarities with traditional NC for classification tasks with MSE loss. In our proofs, we leveraged some existing results from UFM theory for MSE loss. However, the regression problem is substantially different from the MSE classification problem, resulting in a very different definition of neural collapse. In classification with balanced data, neural collapse refers to the convergence of the last layer features and weight matrix to an ETF geometric structure. In contrast, in multivariate regression, the feature embeddings converge to an n-dimensional subspace spanned by the principal components of the features, which converge to the subspace spanned by the row vectors of the weight matrix $W$. Moreover, in regression tasks, the weight matrix satisfies distinct properties that depend on the covariance matrix of the target variables and their eigenvalues. Therefore, the qualitative nature of the final results is substantially different from those for classification. In addition, the NRC properties provide valuable insights for model training in regression. As demonstrated below, we also present training with a fixed $W$ based on the neural collapse solution, which improves training efficiency without compromising performance. We anticipate that further exploration will reveal additional implications, which we leave for future work.
**Questions:**
* Indeed, the weights $W$ and features $H$ in the final regression layer have desirable properties, as highlighted by Theorem 4.1. Consequently, instead of learning the final regression layer, we can design the weight matrix $W$ as a random matrix such that $WW^{\top}=\Sigma^{1/2}$ as suggested by Theorem 4.1. By training the model with the weights $W$ kept frozen, we can significantly reduce the number of parameters and the computational complexity of training, while still achieving comparable performance in terms of both training/testing MSE and NRC metrics, as illustrated in Figure D.
* This paper focuses on deep regression with a final linear layer, where the network is highly overparameterized and capable of learning a wide range of representations. In lines 217-221, we discussed the resemblance of the UFM with standard (multivariate) linear regression. On a high level, one can regard the features $h_i$, $i=1,...,M$, as the inputs to linear regression. Then, the objective becomes the minimization of the squared loss between the predicted outputs $Wh_i+b$ and the targets $y_i$. In standard regression, the $h_i$’s are fixed inputs, whereas in the UFM, we are optimizing over the weights $W$, the biases $b$ and most importantly over the “inputs” $H$. As for GP regression, this is indeed an interesting research direction, but one that is certainly very rich and beyond the scope of the current paper.
* The norms of $W$ and $H$ have different training dynamics in classification and regression tasks. In classification with CE loss and no regularization, once the features can be perfectly separated (during TPT), the training objective always decreases if we fix the directions of $H$ and $W$ and only increase their norms (similar to temperature scaling), with the optimum attainable only at infinity. However, with MSE loss and no regularization, the loss increases towards infinity as the norm of the predictor $Wh_i$ approaches infinity. The MSE loss therefore forces the norms of the predictors to be finite on its own, without the need for regularization. The actual optimal solutions for $W$ and $H$ are characterized in Theorem 4.3 (no-regularization case), which shows that there are in fact infinitely many finite-norm solutions.
* In Figure 5, the train MSE decreases when decreasing the regularization constants, mirroring the decrease of the train MSE in Figure 4 when decreasing $\lambda_{WD}$. This observation aligns with modern machine learning practice, where training MSE tends to be lower with no or weaker regularization. Moreover, Figure 5 exhibits a phase change in NRC, with NRC being more pronounced for larger regularization constants. Following your suggestion, we have added the experimental results corresponding to no regularization to Figure 5; see Figure B in the rebuttal PDF for these additional results.
* Figure 4/A has been updated to include experiments on all six datasets, providing a comprehensive validation of the impact of regularization. We have also updated Figure B to include the Reacher, Swimmer, Hopper, and CARLA2D datasets. Due to space limitations, CARLA1D and UTKFace are not included in Figure B and will be added in the future revision.
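To make the norm argument in the response above concrete, here is a toy numerical illustration (our own sketch with made-up logits and targets, not an experiment from the paper): scaling separable logits keeps decreasing the CE loss without bound, while scaling a regression predictor past its optimum increases the MSE loss, so the MSE optimum has finite norm.

```python
import numpy as np

def ce_loss(t):
    # CE loss for logits t*[2, -1] with class 0 correct, written in the
    # numerically stable form log(1 + exp(-3t)); strictly decreasing in t
    return np.log1p(np.exp(-3.0 * t))

def mse_loss(t):
    # regression predictor t*[1, 0.5]: direction of W h fixed, norm scaled by t
    pred = t * np.array([1.0, 0.5])
    y = np.array([2.0, 1.0])   # optimum at t = 2, loss grows beyond that
    return ((pred - y) ** 2).sum()

# CE: increasing the norm always helps once features are separable
assert ce_loss(100) <= ce_loss(10) < ce_loss(1)
# MSE: a finite norm minimizes the loss; overshooting hurts
assert mse_loss(2) < mse_loss(1) and mse_loss(2) < mse_loss(100)
```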
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thanks for your detailed explanation. I will raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful review, your comment, and for raising your score. | Summary: This paper investigates Neural Regression Collapse (NRC), a new form of Neural Collapse observed in multivariate regression tasks. NRC is characterized by three phenomena: (NRC1) last-layer feature vectors collapsing to the subspace spanned by the principal components of feature vectors, (NRC2) feature vectors collapsing to the subspace spanned by weight vectors, and (NRC3) the Gram matrix for weight vectors converging to a form dependent on the targets' covariance matrix. Empirical evidence from various datasets and architectures supports NRC's prevalence. The Unconstrained Feature Model (UFM) explains these phenomena, indicating NRC emerges with positive regularization parameters. This study extends Neural Collapse to regression, suggesting a universal deep learning behavior.
Strengths: The paper addresses the significant issue of Neural Collapse in regression tasks, extending its understanding beyond classification and suggesting a universal behavior in deep learning models.
Weaknesses: In line 601, the matrix $\left[ \tilde{\Sigma} - \sqrt{Mc} \right]_{+}^{\frac{1}{2}}$ is missing a factor of $\mathbf{I}_n$ multiplying $\sqrt{Mc}$.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors explain in detail how Eq. (24) and Eq. (25) are derived in the appendix?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the detailed review and helpful comments. We also thank you for recognizing NRC as a “new form of Neural Collapse observed in multivariate regression tasks” and that “this study extends Neural Collapse to regression, suggesting a universal deep learning behavior”.
**Weaknesses:**
Indeed, in line 601, the matrix you quoted requires multiplication of $\sqrt{Mc}$ by $\mathbf{I}_n$. We have corrected this typo in our manuscript.
**Questions:**
Finally, let us elaborate on the derivation of Eq. (24) and Eq. (25) in the Appendix. The proof of Theorem 4.1 leverages [Zhou et al., 2022a, Lemma B.1]. For clarity, we have restated their lemma in our notation; see Lemma D.1 in Appendix D. To derive $WH$ (Eq. (23) in our work), Zhou et al. first showed that, if $w_i$ and $h^i$ denote the $i$-th columns of $W$ and $H^T$ respectively, then, when $w_i\neq 0$ and $h^i\neq 0$ (if this is not the case, then $w_i=h^i=0$), we have that
$$
w_i = u_i ||w_i||, \qquad h^i = v_i ||h^i||, \qquad \lambda_H ||h^i||^2 = M \lambda_W ||w_i||^2, \qquad \sqrt{\frac{M \lambda_W}{\lambda_H}} ||w_i||^2=\sigma_i - \sqrt{M c},
$$
with $\sigma_i\ge \sqrt{M c}$, for all $i=1,...,n$, where $\sigma_i$ is the $i$-th singular value of $\tilde{Y}$ and $u_i$, $v_i$ are the corresponding left and right singular vectors. Rewriting the equation above (see Eq. (22) in [Zhou et al., 2022a, Lemma B.1]) in matrix notation readily yields Eq. (24) and Eq. (25) in the Appendix. We will include this clarification in the Appendix.
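As a numerical sanity check on the scalar relations above (our own sketch; the constants are arbitrary), one can solve for $\|w_i\|^2$ and $\|h^i\|^2$ from the stated identities. The final assertion reflects our reading that $\|w_i\|\,\|h^i\| = \sigma_i - \sqrt{Mc}$ for the active indices, consistent with the singular values of $WH$ in Eq. (23).

```python
import numpy as np

# arbitrary illustrative constants: M samples, regularizers lambda_W, lambda_H,
# and singular values sigma_i of the centered target matrix Y~
M, c, lam_W, lam_H = 100, 1e-3, 5e-4, 5e-4
sigmas = np.array([3.0, 1.2, 0.8])

active = sigmas >= np.sqrt(M * c)  # otherwise w_i = h^i = 0
# from sqrt(M*lam_W/lam_H) * ||w_i||^2 = sigma_i - sqrt(Mc):
w2 = np.where(active, np.sqrt(lam_H / (M * lam_W)) * (sigmas - np.sqrt(M * c)), 0.0)
# from lam_H * ||h^i||^2 = M * lam_W * ||w_i||^2:
h2 = (M * lam_W / lam_H) * w2

# the quoted identity holds by construction on the active indices
assert np.allclose(np.sqrt(M * lam_W / lam_H) * w2[active],
                   sigmas[active] - np.sqrt(M * c))
# derived: ||w_i|| * ||h^i|| = sigma_i - sqrt(Mc), the singular values of WH
assert np.allclose(np.sqrt(w2 * h2)[active], sigmas[active] - np.sqrt(M * c))
```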
---
Rebuttal Comment 1.1:
Comment: I sincerely apologize. While reviewing, I initially wrote the comments in markdown, but it seems that I didn't manage to paste all of them. Overall, the Neural Collapse in regression is very interesting, and this is a work of significant originality and innovation. I appreciate the value of this research.
---
Reply to Comment 1.1.1:
Comment: We thank you for your review and your appreciation of our work! | Summary: This paper studies the neural collapse phenomena in neural multivariate regression. The authors rigorously analyze the neural collapse behavior using a simplified model that only includes the last two layers. It was shown that in the multivariate regression task, the last layer of the neural network would collapse to a certain structure that aligns with the principle components of the last layer features. Moreover, the authors highlight the importance of regularization in the prevalence of neural collapse and argue that small regularization may lead to a non-collapsed solution. Experimental results on various real-world datasets are presented to support the theoretical findings.
Strengths: The presented study is novel, it discovers the neural collapse phenomenon in multivariate regression, extends the boundary of neural collapse, and provides new understanding of neural multivariate regression. The theoretical results and experimental results are solid and well-organized.
Weaknesses: My major concern is the significance and potential impact of the results:
1. Neural collapse under MSE loss has been extensively studied in the literature already [1]. The analysis technique in this paper is not new, and the only difference is in the distribution of Y.
2. The authors don't provide enough evidence to support the potential impact of neural collapse in multivariate regression. For example, does neural collapse imply better generalization or robustness in regression [2]? Does fixing the last layer to be neural collapse help the training [3]?
Overall I would encourage the authors to add more discussion about the practical message to increase the impact of the current paper.
[1] Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path
[2] Prevalence of Neural Collapse during the terminal phase of deep learning training
[3] A geometric analysis of neural collapse with unconstrained features
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Since the neural collapse is primarily for the terminal phase of training, it is helpful to include $R^2$ of each experiment to confirm the network has entered the terminal phase of training.
2. Is there any specific reason for not regularizing on the bias term in the UFM (Equation 1)?
3. In theorem 4.3, it was shown that without regularization, there are an infinite number of global minimums that are not collapsed solutions. However, it was well known that linear regression trained with gradient descent is implicitly biased towards minimum norm solution. Therefore I am curious about if collapse will happen in the optimization result on UFM.
4. Discussing the difference with [1] is beneficial, it shows neural collapse happens in classification tasks without any regularization, while the current work shows this is not the case in regression.
[1] An unconstrained layer-peeled perspective on neural collapse
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed review and helpful comments. We also thank you for recognizing that the “study is novel,” “provides a new understanding of neural multivariate regression,” and that the “theoretical results and experimental results are solid and well-organized”.
We agree, however, that we didn’t sufficiently emphasize the significance and impact of the results. Indeed, neural collapse for the classification problem under MSE loss has been extensively studied. We will update our paper to fully describe our contribution with respect to earlier works on MSE loss for classification. Indeed, in our proofs, we leveraged important existing results from the UFM theory for MSE loss. But the regression problem remains substantially different from the MSE classification problem, a difference which is highlighted by a very different definition of neural collapse. Classification with a balanced data set gives rise to an ETF geometric structure, whereas multivariate regression gives rise to a distinctly different structure defined by our NRC1-3 definitions, which involve subspaces spanned by principal components and the covariance matrix of the targets. This new definition requires an entirely new empirical analysis on entirely different datasets. And although the mathematical UFM derivations for regression mirror those for classification, the qualitative nature of the final results (involving the covariance matrix of the targets and their eigenvalues) is substantially different from that for classification.
In terms of impact, the NRC properties offer valuable insights into model training for regression tasks. For example, we can fix the last layer to neural collapse solutions by setting the weight matrix $W$ to be a random matrix which satisfies $WW^{\top}=\Sigma^{1/2}$ as suggested by Theorem 4.1. By training the model with the weights $W$ kept frozen, we can significantly reduce both the number of parameters and the computational burden, while still maintaining comparable performance in terms of training/testing MSE and NRC metrics, as shown in Figure D. We will include this figure and the corresponding discussion in the revised version. Thank you for your suggestion. We believe that the empirical and theoretical results in this paper will guide the future design of multivariate regression problems.
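A minimal numpy sketch of this frozen-head construction (our own illustration; the function name and shapes are assumptions, not the paper's code). Writing $W = \Sigma^{1/4} Q$ with $Q$ having orthonormal rows gives $WW^{\top} = \Sigma^{1/4} Q Q^{\top} \Sigma^{1/4} = \Sigma^{1/2}$:

```python
import numpy as np

def frozen_regression_head(sigma, d, seed=0):
    """Build a fixed last-layer weight matrix W (n x d) with W W^T = Sigma^{1/2}."""
    rng = np.random.default_rng(seed)
    # symmetric PSD fourth root of the target covariance Sigma
    evals, evecs = np.linalg.eigh(sigma)
    sigma_quarter = evecs @ np.diag(np.clip(evals, 0, None) ** 0.25) @ evecs.T
    # random n x d matrix Q with orthonormal rows (Q Q^T = I_n)
    q, _ = np.linalg.qr(rng.standard_normal((d, sigma.shape[0])))
    return sigma_quarter @ q.T

# check on a toy 2-D target covariance with d = 256 features
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
W = frozen_regression_head(sigma, d=256)
evals, evecs = np.linalg.eigh(sigma)
sigma_half = evecs @ np.diag(evals ** 0.5) @ evecs.T
assert np.allclose(W @ W.T, sigma_half)
```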
Thank you for your suggestion to investigate $R^2$ during the terminal phase of training. In Figure A of the supporting PDF, we provide the $R^2$ results, which confirm that the network has entered the terminal phase of training.
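For reference, the $R^2$ check can be sketched as follows (a minimal version of our own; averaging $R^2$ over target dimensions in the multivariate case is our assumption):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination, averaged over target dimensions.
    y_true, y_pred: (num_samples, n) arrays."""
    ss_res = ((y_true - y_pred) ** 2).sum(axis=0)
    ss_tot = ((y_true - y_true.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))

# perfect predictions give R^2 = 1; any residual error lowers it
y = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
assert r_squared(y, y) == 1.0
assert r_squared(y, y + 0.1) < 1.0
```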
We omitted regularization of the bias term for the UFM for two reasons. First, in practice, it rarely improves performance. Second, it allows for a cleaner presentation of the theorem statements and proofs. It could be easily added without changing the main takeaway points.
Thank you for your interesting comment about no regularization. We agree that gradient descent often exhibits an implicit bias towards a minimum norm solution. We have added the curve corresponding to $\lambda_{H}=\lambda_{W} = 0$ in Figure B, which indeed shows to be the case, with the NRC metric typically decreasing as training progresses. This is consistent with the observation in the layer-peeled paper, as you indicated. We will highlight this in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and the efforts of additional experiments. I don't have further concerns and would like to raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful review, your comment, and for raising your score. | Summary: The authors explore a new notion of neural collapse which has been formulated to accommodate multivariate regression tasks. NC was originally introduced and recognized as an artifact of multi-class classification tasks. While there has been extensive research into the phenomena of neural collapse for classification, this paper is interesting in that it is the first theoretical study of NC for regression problems. The results are novel and the methodology is sound.
The paper is nice because it expands the view of NC in deep learning to cover regression as well as classification. The authors shed further light on this phenomenon and their results fit nicely in the existing literature. Being a new viewpoint (e.g. regression analysis) on a well-studied phenomenon (e.g. neural collapse), there is some question as to whether their formulation is the right approach. For me, their definition seems compelling.
Essentially, the idea is that instead of looking at the class means or class features and the simplex formed by those vectors in embedding space, you instead look at the $n$ principal components of the sample features in embedding space where $n$ is the dimension of the target space (e.g., n=1 for univariate regression). In the same way that NC for classification says the class feature means collapse to an ETF, for the regression setting, the sample features collapse to the $n$ principal components in embedding space.
Strengths: The paper is well-written and well motivated. This is an interesting problem and it's nice to see such a well thought out first attempt at extending an important phenomenon (i.e. neural collapse) to a more broad task setting. The theory is solid and well supported with clear and careful proofs.
Weaknesses: It would be nice to see more comprehensive and consistent experiments (I've outlined suggestions below). The authors are introducing a new concept; or more specifically, they are reframing a well-known concept to a new setting (i.e. neural collapse to neural regression collapse). It is of course impossible to include all possible datasets and architectures but when claiming such a novel and fundamental property of deep learning there is an additional onus on the authors to be as comprehensive as possible.
Overall, the primarily weaknesses are surrounding the experiments which good but lacking. There are a few things missing from the experiments that I would like to see and would alleviate some of my confusion around these results:
1. None of the plots in Figures 1-6 include error bars. Were multiple seed trials run for these experiments? This seems to be a glaring omission
2. Indicate the value of \gamma that is being used for the NRC3 plots in Figure 2.
3. in Figure 3, is this unique (?) minimizer for \gamma/\lambda_{max} stable or consistent across multiple trials with differing seeds and initializations?
4. the representation of datasets is not comprehensive and inconsistent. I understand not all plots can be contained in the main paper but it would be nice to see them in the Appendix. For example Figure 4 and 5 (which examine similar quantities) have Figure 4 looking at CARLA2D & Swimmer and Figure 5 looking at Swimmer and Reacher. Figure 3 does not include CARLA1D or UTKFace datasets. Why?
Technical Quality: 3
Clarity: 3
Questions for Authors: There is extensive analysis of the neural collapse to the $n$ principal components. Did you do any work to justify that $n$ is the right number of principal components to assume? The embedding dimension could be quite large in comparison. How would your analysis change if you let $n$ be different from the dimensionality of the targets? For example, what happens to your NC conditions when you take $n=1$ for one of your robotic datasets? Or set $n=3$ for one of them? I realize this doesn't lend itself to as clear an interpretation (if any), but to play devil's advocate, it would be helpful to see that analysis, particularly in the form of some explained variance ratio. For classification tasks, the structure of the ETF during neural collapse is very natural. For a regression task, particularly when introducing a new definition, it requires an additional justification or at least an additional analysis.
High-level question out of curiosity: There are standard ways to convert a (univariate) regression task into a multi-class classification task (e.g., by binning targets into quantiles). Similarly, there are standard ways to express a classification task as a regression task. How would the standard notion of NC for classification compare with your formulation of NC for regression between these two formulations? It would be interesting to see how analogous or stable this proposed definition is under such a re-framing.
line 71 (clarification): you state that when the regularization parameters are zero or very small, there is no collapse. Does that mean all NRC1-NRC3 fail, or just some?
line 77: you mention that this framing of NC for regression can lead to more explainable models and potentially more efficient training processes. Can you reasonably justify this claim? How exactly? Do you have any references or reasonable examples (albeit from the classification setting) that you can point to and how those assumptions would compare to a regression task?
In the related work (lines 88-92) you discuss previous work concerning NC for classification tasks when the classes are imbalanced. In the imbalanced classification setting, some of the original NC properties no longer hold and/or they must be reformulated to account for the class imbalance. How does one translate the notion of a balanced dataset to a regression task? This aspect seems to have been ignored entirely. Can you add some more detail justifying why it's been ignored or how it relates to your current framing?
line 150: you state "typically $d=256$ or larger and typically $n \leq 5$." Typically with respect to what?
line 199-201: you state "This indicates that NC is…a fundamental property of multivariate regression" This feels like an overstatement. This paper is examining 6 datasets over two model classes. The results are certainly very promising but the world of multivariate regression and model architectures can be vastly larger.
In Figure 2: NRC3 states that there exists a constant $\gamma \in (0,\lambda_{min})$ such that the derived quantity goes to zero. What value of $\gamma$ are you using in the plots for NRC3 here? According to Figure 3, it seems that there is a unique (?) optimal value of $\gamma$ which minimizes NRC3.
line 202: you mention that each dataset (excluding CARLA1D and UTKFace) exhibits a unique minimum value of NRC3 over the range of $\gamma$'s explored. Why is that the case? I may have missed this in the exposition and theory.
In Figure 3: Why are CARLA1D and UTKFace not included? Presumably because these have $n=1$. One could still evaluate $\lambda_{max}$. What do the corresponding plots look like for these univariate datasets?
In Figure 4: you only show results for the CARLA2D and Swimmer datasets. Were the other datasets not included in your experiments? It would be nice to see comparable results, at least in the appendix, particularly if you're claiming the ubiquity of these results for multivariate regression tasks.
line 207: you claim the geometric structure you propose in NRC1-NRC3 is due to the regularization and, in fact, is not exhibited without at least some regularization. In particular, when there is no regularization, why does NRC occur (in any of the criteria)? And do the models correspondingly not converge? Or do they? Can you include the Train MSE (and/or Test MSE) in Figure 4 as you do in Figure 5? This point about the regularization constant seems important but admittedly remains unclear to me. You mention that this is addressed later (e.g. Section 4.4), but the subsequent section is focused only on the UFM model. There is an underlying assumption that the UFM model encapsulates the NRC setting, similar to what has been demonstrated for classification NC. As for intuition, I agree. But a rigorous logical connection for the regression case remains unclear to me.
In Figure 5: I assume when $\lambda_H=\lambda_W=0$, we would see the training diverge, or at least the NRC values diverge? Is that true? What happens in this setting that connects with what we see in Figure 4?
(typos/nits)
In Figure 4: I would change the labeling of the y-axis to denote the quantity being measured (e.g. NRC1 - NRC3) and use the plot titles to indicate the dataset. However, I guess this would turn your 2x3 plot into a 3x2 plot and not be the most efficient use of space. Perhaps you can do something similar to how you've displayed Figure 5.
Figure 4 and Figure 5 seem comparable to me (an exploration of the NRC1-NRC3 values for small weight decay regularization constants). But why is there an inconsistency between the datasets examined. For Figure 4 you look at CARLA2D & Swimmer and for Figure 5 you look at Swimmer and Reacher. Why? This is even more glaring because the remaining datasets aren't included in the appendix either. Is that because the results aren't as good?
Figure 5: the caption is not very descriptive. I recommend adding more detail here.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors have addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your very detailed and insightful review, and for all the positive comments you made regarding our work. We also fully agree that the presentation of the experimental results can be improved. During the rebuttal week, we have worked very hard, running additional experiments and reorganizing our figures to respond to your comments. We believe our experiments are now “comprehensive and consistent”. We promise to include _all_ additional results below in the future revision.
We ran all experiments with 3 random seeds and plot the variance across seeds as the shaded area. As shown in all figures, there is little change across seeds, confirming that NRC consistently emerges in regression.
Concerning your question about $\gamma$ in Figure 2: We ran all experiments long enough to ensure that training enters TPT, as measured by $R^2$ (see Figure A). After training, we extracted the $W$ matrix and identified the $\gamma$ that minimizes NRC3 for that specific $W$. This $\gamma$ was then used to compute the NRC3 metric for all $W$ matrices during training, resulting in the NRC3 curves shown in Figure 2. Figure 3 visually shows NRC3 as a function of $\gamma$ for the final trained $W$. Mathematically, we can show that if $\lambda_{WD}$ is sufficiently large, NRC3($\gamma$), as given in the definition of NRC3, is convex and has a unique minimum. Empirically, our experiments show that the minimizing $\gamma$ is consistent across different seeds.
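This $\gamma$ search can be sketched as a simple grid scan over the trained $W$; note that the NRC3-style mismatch used below is our paraphrase (Frobenius distance between the normalized $WW^{\top}$ and the normalized $\Sigma^{1/2}-\gamma I_n$), not necessarily the paper's exact formula, and the matrices are toy constructions:

```python
import numpy as np

def nrc3(W, sigma_half, gamma):
    # mismatch between W W^T and Sigma^{1/2} - gamma*I, both Frobenius-normalized
    A = W @ W.T
    B = sigma_half - gamma * np.eye(len(sigma_half))
    return np.linalg.norm(A / np.linalg.norm(A) - B / np.linalg.norm(B))

rng = np.random.default_rng(0)
sigma_half = np.diag([2.0, 1.0])   # toy Sigma^{1/2}, so lambda_min = 1
gamma_true = 0.3
# construct a W consistent with W W^T = Sigma^{1/2} - gamma_true * I
target = sigma_half - gamma_true * np.eye(2)
W = np.linalg.cholesky(target) @ np.linalg.qr(rng.standard_normal((8, 2)))[0].T

# scan gamma over (0, lambda_min) and keep the minimizer
gammas = np.linspace(0.01, 0.99, 99)
best = min(gammas, key=lambda g: nrc3(W, sigma_half, g))
assert abs(best - gamma_true) < 0.02  # grid recovers the planted gamma
```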
We very much liked your suggestion to examine the explained variance ratio. The results, shown in Figure C, are striking: for all datasets, there is significant variance in the first $n$ components, while the remaining components have very low or even no variance. We also examine alternative definitions of NRC1 below, with $n$ varying from 1 to 4, to justify setting $n$ equal to the target dimension.
| Dataset |NRC1_pca1|NRC1_pca2|NRC1_pca3|NRC1_pca4|
|:-:|:-:|:-:|:-:|:-:|
|Reacher|1.01e-2|4.02e-4|2.55e-6|2.27e-12|
|Swimmer|5.48e-1|3.96e-11|1.10e-11|5.02e-12|
|Hopper|0.617|0.558|0.499|0.129|
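The explained-variance computation behind Figure C can be sketched as follows (our own minimal version; `H` stands in for a hypothetical matrix of last-layer features):

```python
import numpy as np

def explained_variance_ratio(H):
    """Fraction of feature variance captured by each principal component.
    H: (num_samples, d) matrix of last-layer features."""
    Hc = H - H.mean(axis=0)
    evals = np.linalg.eigvalsh(Hc.T @ Hc / len(H))[::-1]  # descending
    return evals / evals.sum()

# toy features concentrated in a 2-D subspace of a 16-D embedding:
# the first n=2 components should explain essentially all the variance
rng = np.random.default_rng(0)
H = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 16))
ratio = explained_variance_ratio(H)
assert ratio[:2].sum() > 0.999
```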
Concerning your question about converting a univariate regression problem to a classification problem, and vice-versa: We believe this is an interesting direction for future work. Of course, if $n$ is large, the conversion from regression would become difficult due to quantization, and the resulting classification dataset would likely be highly imbalanced. In classification, as you state, the dataset can be balanced or imbalanced, and there are NC studies for both. If we were to convert a univariate regression dataset to a classification dataset using quantization, the dataset would be balanced only if there are the same number of $y_i$ values in each quantile. As this property is likely unrealistic for most datasets, we conclude that neural regression has a stronger connection to classification with imbalanced datasets.
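A small sketch of the quantile-based conversion discussed above (our own illustration, with synthetic targets): on a continuous target, sample-quantile bins hold equal numbers of $y_i$ values, while heavily tied targets (repeated values) break the balance, yielding an imbalanced classification dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantile_labels(y, k=4):
    """Bin 1-D targets into k classes by sample quantiles (hypothetical helper)."""
    edges = np.quantile(y, np.arange(1, k) / k)
    return np.digitize(y, edges)

# continuous targets: each quantile bin holds the same number of y_i values
y_cont = rng.exponential(size=1000)
counts = np.bincount(quantile_labels(y_cont), minlength=4)
assert counts.max() - counts.min() <= 1

# heavily tied targets (40% identical values): the bins become imbalanced
y_tied = np.concatenate([np.zeros(400), rng.exponential(size=600)])
counts_tied = np.bincount(quantile_labels(y_tied), minlength=4)
assert counts_tied.max() - counts_tied.min() > 1
```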
We believe that the UFM theory provided in the paper helps to explain many properties of neural regression. In terms of efficiency, experiments in Figure D show that we can fix W to a random matrix such that $WW^{\top}=\Sigma^{1/2}$ as suggested by Theorem 4.1, and use this fixed matrix throughout training. This approach significantly reduces the number of weights to be optimized and results in similar performance and NRC metrics, paralleling a well-known approach in NC for classification.
In deep learning, the number of features $d$ in the penultimate layer is typically 256 or larger, while in multivariate regression the target dimension $n$ is typically small in comparison. For example, most MuJoCo environments, which are often used to study imitation and reinforcement learning, satisfy $n \leq 6$.
We agree that “NC is…a fundamental property of multivariate regression” is a bit of an overstatement. We will rewrite this to say “often occurs in neural multivariate regression”.
We omitted the plots of NRC3 versus $\gamma$ for univariate regression with $n=1$. For $n=1$, $\lambda_{max} = \sigma^2$, the variance of the 1D targets; this is just a scalar value for each of the $n=1$ datasets and does not lead to an insightful plot.
Concerning your question about which NRC metrics fail when the regularization parameters are zero or very small: Figures A and B show that as the weight decay approaches zero, NRC1-3 typically become larger, compared with the values obtained with larger weight decay.
In particular, when there is no weight decay, we run more training epochs to verify the asymptotic behavior of NRC1-3 and the test MSE in Figure A. We observe that NRC1-3 have a strong tendency to converge (there is a relatively small amount of collapse, since gradient descent tends to seek solutions with small norms), while the test MSE increases on the small MuJoCo datasets. Theorem 4.3 provides some insight: when there is no regularization, there are infinitely many non-collapsed optimal solutions under the UFM, whereas Theorem 4.1 shows that when there is regularization, all solutions are collapsed. With regularization, we are seeking a small-norm optimal solution, which leads to NRC1-3.
The main assumption in the UFM model – for both classification and regression – is that the neural network is capable of mapping any set of training inputs to any set of feature vectors. Based on this assumption, the UFM model leads to a new optimization problem for classification and regression. We believe the UFM model for regression makes the same logical connection that is made for classification.
We have included UFM experiments with no regularizer in Figure B. As in Figure A, NRC1-3 do not converge to values as low as those obtained with regularization. As indicated in Section 4.5, Figure 5/B is intended to validate the results under the UFM framework. To align with the UFM assumption, we remove the ReLU in the penultimate layer so that the learned features can be any real numbers instead of being restricted to positive values.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply.
Just to clarify, do I understand correctly that all of the experiment plots you present in the original paper (Fig. 2 - Fig. 5) have been run with 3 different seeds? It's hard for me to see any shaded error bars on the plots themselves (except for some of the plots in Fig B of the rebuttal). Are you saying there is that little change across seeds for all experiments?
Thanks for the additional experiments you've included in the rebuttal document. In particular, Fig. C with the Explained Variance Ratio seems nice, and I would recommend including it in the final version of the paper.
Thanks for the additional work for the rebuttal. I'm glad to hear you feel the comments have helped to improve the paper. I will keep my score as it is.
---
Rebuttal 2:
Comment: Thank you for your comment. The original paper used only one seed. In response to your review, we re-did all the experiments over all 6 datasets with 3 seeds, and the results are shown in the figures of the rebuttal. The shaded regions are generally narrow, showing little change across seeds for most (but not all) of the datasets.
We will include Figure C (and all of the other updated figures in the rebuttal) if accepted. Thanks again for your excellent review. | Rebuttal 1:
Rebuttal: Since 2020, when [Papyan et al., 2020] published their seminal paper on neural collapse, there has been a flurry of activity in the area, with at least a dozen papers on the topic of neural collapse published in major machine learning venues. To our knowledge, this entire stream of important research is focused on the classification problem.
As discussed in the introduction of our paper, arguably, regression is at least as important as classification in modern machine learning. To our knowledge, our paper under review to NeurIPS is the first paper to examine regression, including multivariate regression, in the context of neural collapse. The paper first proposes a brand-new definition for neural collapse for regression with a very different geometric description than the ETF definition appropriate for classification. The paper then presents extensive experimental results establishing that multivariate regression indeed exhibits neural collapse as defined by this new definition. In this rebuttal, we supplement our original experimental results with a suite of new experimental results, as requested by the reviewers, further confirming the prevalence of neural collapse in regression. Our submission also considers an approximation in which the neural network is assumed to be capable of mapping any set of training inputs to any set of feature vectors, the so-called Unconstrained Feature Model (UFM), which has also been used to analyze neural collapse in classification. Under the UFM model, we derive the explicit solutions for the optimal features, last-layer linear weights, predictors, and MSE training error, and show that the theoretical results match the empirical results, providing evidence that the UFM model can help explain the emergence of neural collapse in regression.
The paper was reviewed by four expert reviewers, and all four of them appeared to be positive about the paper. Reviewer vXK1 writes: “this paper is interesting in that it is the first theoretical study of NC for regression problems. The results are novel and the methodology is sound.” The reviewer goes on to say, “the paper is nice because it expands the view of NC in deep learning to cover regression as well as classification. The authors shed further light on this phenomenon and their results fit nicely in the existing literature…there is some question as to whether their formulation is the right approach. For me, their definition seems compelling.” The same reviewer also writes, “this is an interesting problem and it's nice to see such a well thought out first attempt at extending an important phenomenon (i.e. neural collapse) to a more broad task setting. The theory is solid and well supported with clear and careful proofs”. Reviewer 6Hcy writes, “the presented study is novel, it discovers the neural collapse phenomenon in multivariate regression, extends the boundary of neural collapse, and provides a new understanding of neural multivariate regression. The theoretical results and experimental results are solid and well-organized.” Reviewer T8Jo writes, “the paper addresses the significant issue of Neural Collapse in regression tasks, extending its understanding beyond classification and suggesting a universal behavior in deep learning models”. And reviewer 8FCG writes, “the paper introduces a novel extension of neural collapse to neural regression collapse” and further writes, “the paper is well-written and has a good structure,” and “the paper also includes comprehensive proofs, demonstrating NRC's performance under the UFM setting”.
Although there is a consensus that the paper is novel and interesting, the reviewers have raised many important questions, which is perhaps why they originally scored the paper as a “marginal accept” rather than a “strong accept”. Since receiving the reviews, we have worked hard at running complementary experiments (as shown in Figures A, B, C, and D in the attached PDF file) and writing the rebuttals, addressing all of their comments and suggestions. We hope that the reviewers and the area chair will find our rebuttal satisfactory.
Pdf: /pdf/0d366bea71f8b75f68fb484d2d1491dd1fa135e4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Breaking Long-Tailed Learning Bottlenecks: A Controllable Paradigm with Hypernetwork-Generated Diverse Experts | Accept (spotlight) | Summary: This paper aims to overcome potential distribution shifts from a single training distribution to any testing distribution and adapt to different user preferences for trade-offs between head and tail classes. The proposed method leverages hypernetworks to generate a diverse set of expert models, enabling the system to adapt to any test distribution and cater to user-specific preferences. Extensive experiments demonstrate the method's superiority in performance and adaptability, providing new insights and expanding the applicability of long-tailed learning in practical scenarios.
Strengths: 1. Addressing the issue of test distribution invariance and incorporating diverse user preferences for trade-offs between head and tail classes is both meaningful and practical.
2. The paper offers theoretical foundations and practical results.
3. Experiments validate the effectiveness of the proposed method.
Weaknesses: 1. Some important concepts require further explanation. For example, what is the meaning of "environment" in Sec. 3? It is unclear why one should minimize empirical risks across multiple training environments, since the training set is typically fixed. Besides, I wonder how the experts are ensembled when the test distribution varies. It seems that the authors did not explain this key issue.
2. In Section 5.1, the paper states "use $\alpha = 1.2$ for the Dirichlet distribution". However, according to Eq.(5), $\alpha$ should be a vector. It is recommended to provide a more detailed explanation.
3. Some more competitive baselines such as PaCO$^1$, DDC$^2$, and DirMixE$^3$ should be considered for comparison.
4. There are some typos, such as in line 36, where "Simply pursuing the overall optimal solution may not meet this flexibility requirement. [16, 29, 36]" should be "Simply pursuing the overall optimal solution may not meet this flexibility requirement [16, 29, 36]." In line 95, $\pi_m^k$ should be $\pi_k^m$.
5. Some figures, such as Figure 1 and Figure 3, are not referenced in the text.
------
$^1$ Parametric contrastive learning, ICCV 2021
$^2$ A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning, NeurIPS 2023
$^3$ Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition, ICML 2024
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the questions in the weakness section.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:**
Thank you for raising the question. Let me try to further clarify some key concepts based on the content of the paper:
- **Concept of environment:**
In Section 3, "environment" refers to a dataset with its own class prior probability distribution. Each environment $E_m$ has its own class prior probability vector $\pi_m$. Traditional empirical risk minimization (ERM) methods train only on a single training distribution, making it difficult to handle distribution differences between environments. Therefore, we propose to minimize the empirical risk in multiple training environments to learn a diverse set of experts that capture the distributional characteristics of different environments.
- **Motivation for minimizing empirical risk in multiple training environments:**
Although the training set is usually fixed, we believe that by constructing multiple training environments with different class distributions, distribution shifts during testing can be better addressed. By learning a set of experts on these environments, the model can capture the characteristics of data under different long-tailed distributions, thereby gaining stronger generalization ability and distribution adaptability. This approach reduces the distribution discrepancy between the training environments and the testing environment.
- **How to integrate experts during testing:**
During testing, we introduce a user-specified preference vector $\alpha$ to control the trade-off between head and tail classes. Given the trained preference vector $\mathbf{r}$, we compute the test-time preference vector and input it into the hypernetwork to generate the classifier-head parameters for each expert. By adjusting $\alpha$, the model's attention to head or tail classes can be controlled, achieving flexible trade-offs. You can refer to the answer to W4 for reviewer VYLm.
**W2:** Thank you very much for the reviewer's question. Our expression in the paper may not be clear enough, leading to some misunderstandings. Please allow me to provide a detailed explanation here.
In our method, we made a special setting for the hyperparameters α of the Dirichlet distribution. Specifically, we set α to a vector of length 3, where each component is equal to 1.2, i.e., α=[1.2, 1.2, 1.2]. This means that we are actually sampling independently from three Dirichlet distributions with the same parameters, i.e., [Dirichlet(α), Dirichlet(α), Dirichlet(α)].
This setting is quite common in the literature related to hypernetworks and multi-objective optimization. For example, in the paper [reference number] that we refer to, the authors also adopt a similar approach, setting the hyperparameters of the Dirichlet distribution to a vector with the same elements. Additionally, we explored using different hyperparameter settings and reported them in the main text. To avoid similar confusion, we will provide a clearer explanation of this point in the revised version.
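As a concrete illustration of this setting, here is a minimal NumPy sketch of sampling expert-combination weights from a Dirichlet distribution with $\alpha = [1.2, 1.2, 1.2]$; the sample count and random seed are arbitrary choices for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([1.2, 1.2, 1.2])  # symmetric hyperparameter vector, as described above

# Each sampled row is a weight vector on the 3-simplex, one weight per expert.
weights = rng.dirichlet(alpha, size=5)

assert np.allclose(weights.sum(axis=1), 1.0)  # rows are valid convex combination weights
print(weights.shape)  # (5, 3)
```

With all components equal, no expert is preferred a priori, while values slightly above 1 keep the sampled weights away from degenerate corner solutions.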
Thank you again for the reviewer's careful review and valuable comments, which help us further improve the quality and readability of the paper.
**W3:** We have supplemented the necessary baseline comparisons in the appendix and will include them in the official version. Thank you for your suggestion.
**W4:** Thank you for your comment. We will correct these errors in the official version.
**W5:** Thank you for your attention to detail! We will add these references in the official version.
---
Rebuttal 2:
Comment: Thank you for your efforts on the rebuttal! The authors have clarified some important details about my concern. Hence, I decide to raise my rating to 7.
---
Rebuttal Comment 2.1:
Title: Thanks
Comment: Thank you very much for your thoughtful feedback on our paper. We are truly grateful for your careful consideration of our rebuttal and the time you've taken to reassess our work. If you have any further thoughts or suggestions, we would be more than happy to hear them. | Summary: This paper addresses long-tailed learning with a focus on tackling distribution shift and accommodating different user preferences for the trade-off between head and tail classes. The authors propose a method called PRL, which generates a set of diverse expert models via hypernetworks to cover all possible distribution scenarios, optimizing the model ensemble to adapt to any test distribution. Experiments across various long-tailed datasets validate the method's effectiveness.
Strengths: - This work is well-motivated. Addressing distribution shift and accommodating different user preferences for the trade-off between head and tail classes is highly practical.
- The authors provide a range of experiments to demonstrate the method's effectiveness.
Weaknesses: - The paper uses $\theta$ and $\phi$ in Figures 3 and 4, but they are not defined beforehand.
- In Section 5.2, the paper investigates the model's performance under different preference values $R$ but does not explain their meaning.
- In Section 4.4, it is unclear how the trained preference vector is obtained.
- I am confused about how to ensemble the experts after training is complete.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see above.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** Thank you for your correction. $R = (\theta, \phi)$ represents our preference vector, where $\phi$ is the radian representation mentioned in Equation (14). Figures 3 and 4 depict how different preference vectors control model performance. We will define $\theta$ and $\phi$ before their first use in the revised version.
**W2:** Thank you for your correction. As noted in W1, $R = (\theta, \phi)$ represents our preference vector, where $\phi$ is the radian representation mentioned in Equation (14); the different preference values $R$ in Section 5.2 correspond to different trade-offs between head and tail classes. We will add revisions in subsequent versions to improve readability.
**W3:** Thank you for your question. During the training process, the preference vector is obtained through training as a learnable parameter. During the testing process, the preference vector is jointly determined by the learned preference vector from the training stage and the input preference vector according to Equation (14). For the specific process, please refer to the answer to your W4.
**W4:** In the training stage, we optimized the joint loss function of all experts using the Stochastic Convex Ensemble (SCE) strategy, obtaining a set of well-trained expert models and a hypernetwork. After training is completed, we can dynamically integrate these expert models based on the user-specified test preference vector to adapt to different testing scenario requirements. The specific steps are as follows:
1. Calculate the preference vector for testing:
$$\hat{\mathbf{r}} = (\mathbf{r} \odot \mathbf{\alpha^*}) \oslash (\mathbf{r} \cdot \mathbf{\alpha^*})$$
where $\mathbf{r}$ is the training preference vector, $\odot$ denotes the Hadamard (element-wise) product, $\oslash$ denotes element-wise division, and $\cdot$ denotes the dot product.
2. Input $\hat{\mathbf{r}}$ into the hypernetwork to generate the test-time classifier head parameters for each expert:
$$\hat{\mathbf{W}}_ i = h_\psi(\hat{\mathbf{r}}), \forall i \in \{1, \dots, T\}$$
3. For a test sample x, the output of the $i_{th}$ expert can be calculated using the following formula:
$$\hat{\mathbf{y}}_i = \hat{\mathbf{W}}_i^\top \mathbf{x} + \hat{\mathbf{b}}_i \quad \text{or} \quad \hat{\mathbf{y}}_i = (\hat{\mathbf{W}}_i/\|\hat{\mathbf{W}}_i\|_F )^\top (\mathbf{x}/\|\mathbf{x}\|_2)$$
4. The final ensemble output can be obtained by a weighted combination of all expert outputs. The weights can be uniform or set according to the performance of experts on the validation set.
By adjusting the test preference vector, we can dynamically change the ensemble weights of the expert models, achieving flexible trade-offs between head and tail classes to meet different application requirements.
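The four steps above can be sketched numerically. In this sketch, the dimensions, the random stand-in for the hypernetwork $h_\psi$, the omission of the biases $\hat{\mathbf{b}}_i$, and the uniform ensemble weights are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, C = 3, 8, 10          # experts, feature dim, classes (illustrative sizes)

r      = rng.dirichlet(np.ones(T))   # trained preference vector (assumed on the simplex)
alpha_ = rng.dirichlet(np.ones(T))   # user-specified test preference vector

# Step 1: test-time preference vector, r_hat = (r ⊙ α*) / (r · α*)
r_hat = (r * alpha_) / np.dot(r, alpha_)
assert np.isclose(r_hat.sum(), 1.0)  # the scalar dot-product normalizer keeps r_hat on the simplex

# Step 2: stand-in for the hypernetwork h_ψ producing per-expert classifier heads
def hypernetwork(r_hat):
    return [rng.standard_normal((d, C)) * (1.0 + r_hat[i]) for i in range(T)]

W = hypernetwork(r_hat)

# Steps 3-4: per-expert logits for a test feature x, then a uniform weighted ensemble
x = rng.standard_normal(d)
logits = np.stack([W_i.T @ x for W_i in W])   # shape (T, C)
ensemble = logits.mean(axis=0)                # uniform weights over experts
print(ensemble.shape)  # (10,)
```

The key property is in step 1: because the normalizer $\mathbf{r} \cdot \alpha^*$ is a scalar, $\hat{\mathbf{r}}$ remains a valid preference vector on the simplex for any user-specified $\alpha^*$.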
We appreciate the reviewer for pointing out this shortcoming. If there are any further questions or suggestions, we welcome discussion at any time. Thank you again for the valuable comments from the reviewer.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in the rebuttal. My concerns are addressed. I will raise my rating to 7.
---
Reply to Comment 1.1.1:
Title: Thanks for your efforts.
Comment: We are delighted to see that our response has addressed the concerns you previously raised. We are deeply grateful for your decision to increase the rating to 7, which is a significant encouragement for us.
Your constructive feedback has played a crucial role in improving the quality of our work. If you have any further comments or suggestions, we welcome your feedback at any time. | Summary: This paper addresses the crucial and challenging problem of long-tailed learning under distribution shifts between training and testing data, which is highly relevant to real-world applications. The authors propose a novel and insightful learning paradigm that aims to obtain a set of diverse expert classifiers to adapt to arbitrary test distributions while allowing flexible control of the trade-off between head and tail classes based on user preferences. This new perspective greatly expands the applicability and practicality of long-tailed learning methods, making it a significant contribution to the field.
Strengths: 1. The theoretical contribution of this paper is to analyze the limitations of the traditional empirical risk minimization (ERM) method in dealing with distribution shifts, and to quantify the distribution differences using the concept of environmental total variation gap (ETVD). These theoretical analyses provide sufficient theoretical basis and motivation for the new diversity expert learning method proposed in this paper.
2. The proposed framework elegantly combines hypernetworks and Dirichlet distribution sampling to train multiple diverse experts. This innovative design empowers the model to learn adaptability to a wide range of distributions from a single training set.
3. The comprehensive experiments on multiple benchmark datasets demonstrate the effectiveness of the proposed approach.
4. The work opens up new possibilities for applying long-tailed learning to a wider range of real-world scenarios where the test distribution is unknown or subject to change. The controllable trade-off based on user preferences enhances the interpretability and usability of the model, making it more adaptable to different application requirements.
Weaknesses: 1. I agree with the novelty of this paper. The method proposed by the authors brings a new research perspective to the field. My doubts lie in part of the theoretical interpretation. Regarding the role of the Dirichlet distribution in generating diverse experts, can the authors analyze the effect of the hyperparameters of the Dirichlet distribution on the performance of the algorithm?
2. As for the above question, I hope the author carries out some experimental analysis to support his theory.
3. I think the explanation of Figure 2 seems a bit obscure, and the author would be better off giving a concise explanation.
4. Some symbol errors need to be corrected.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please answer the questions in the Weaknesses, which helps me better understand the theoretical contribution of this paper.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** You raised a very insightful question. The probability density function of the Dirichlet distribution is:
$$f(x_1, \ldots, x_K; \alpha_1, \ldots, \alpha_K) = \frac{1}{B(\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1}$$
where $\alpha=(\alpha_1,\ldots,\alpha_K)$ are the hyperparameters of the distribution, and $B(\alpha)$ is the normalization constant.
Intuitively, the hyperparameters $\alpha$ of the Dirichlet distribution control the characteristics of the generated weight vector $x=(x_1,\ldots,x_K)$:
When $\alpha_i > 1$, the generated weight vector tends to take larger values in the $i$-th component; when $\alpha_i < 1$, it tends to take smaller values in the $i$-th component; when $\alpha_i = 1$ for all $i$, the generated weight vector is uniformly distributed over the simplex.
Therefore, the value of the hyperparameter $\alpha$ affects the diversity of the combination weight vectors generated by the controller network:
- When all $\alpha_i$ are large, the generated weight vectors concentrate near the center of the simplex, reducing the diversity of expert combinations.
- When all $\alpha_i$ are small, the generated weight vectors disperse toward the corners of the simplex, increasing the diversity of expert combinations.
- When the $\alpha_i$ differ greatly from one another, the generated weight vectors show obvious preferences for certain components, leading to an imbalance among expert combinations.
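The effect of the hyperparameter magnitude described above can be checked empirically. The following small NumPy sketch (sample size and the particular $\alpha$ values are illustrative choices) compares the per-component variance of symmetric Dirichlet samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Mean per-component variance of symmetric Dirichlet samples for small, unit,
# and large hyperparameter values.
spread = {a: rng.dirichlet([a, a, a], size=n).var(axis=0).mean() for a in (0.2, 1.0, 5.0)}

# Smaller alpha -> samples pushed toward the simplex corners -> higher variance
# (more diverse expert combinations); larger alpha -> samples concentrate near
# the center (less diverse combinations).
assert spread[0.2] > spread[1.0] > spread[5.0]
print({a: round(v, 4) for a, v in spread.items()})
```

This matches the closed-form variance of a symmetric Dirichlet, which decreases monotonically in the common hyperparameter value.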
In the long-tailed context, this diversity and imbalance will further affect the model's performance:
- Moderate diversity helps the model adapt to different test environments, but excessive diversity may lead to some extreme weight combinations, affecting the model's stability.
- Reasonable imbalance helps the model focus on certain key experts, but excessive imbalance may cause some experts to be ignored, affecting the model's generalization ability.
Therefore, choosing appropriate Dirichlet distribution hyperparameters is crucial for balancing the diversity and stability of expert combinations. We also report experimental results of other weight combinations in the paper. Thank you again for your question.
**W2:** We have supplemented additional experiments in the Appendix. Please refer to the Appendix.
**W3:** Figure 2 intuitively demonstrates the superiority of the method proposed in this paper in dealing with distribution shifts and flexibly controlling preferences. Figure 2 aims to illustrate two advantages of the proposed method: the ability to overcome distribution shifts and the flexibility of preference control.
In Figure 2, we use two three-dimensional coordinate systems. The first coordinate system represents the value space of the preference vector, with each dimension corresponding to a preference (such as preference for head classes, tail classes, or balance). The second coordinate system represents the model's performance on three distributions (forward50, uniform, and backward50) of the CIFAR100-LT dataset.
The dark plane in the figure represents the value plane corresponding to different preference vectors, while the outer surface represents the corresponding performance on the three distributions. The yellow points are the results of the SADE method. Since its preference is uncontrollable, the results of each run are random points, and they are all located below the purple plane of the method proposed in this paper, indicating that its performance is inferior to the method proposed in this paper (i.e., it is dominated in the Pareto optimal set).
This figure shows that the method proposed in this paper can cover unknown distributions without requiring additional training, and unlike previous methods, it can balance performance on different distributions by adjusting the preference vector. These two advantages will be further analyzed in the experimental section.
**W4:** We will revise the errors mentioned by the reviewer in the formal version. Thank you for your reminder.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer qXeH
Comment: I appreciate the authors' efforts in the rebuttal. My concerns are addressed. I will raise my rating to 7. | Summary: The paper addresses the problem of learning long-tailed distributions, with the imbalance of head and tail classes. The paper introduces a long-tail learning paradigm based on diverse set of experts and hypernetworks. The proposed method can meet personalized user preferences and can adapt to wide range of distribution scenarios. The paper proposes this problem as multi-objective optimization, with the goal to learn whole pareto front. The authors also propose theoretical aspects considering distribution shifts and show that diversity experts methods learns a set of experts to capture the distributional characteristics of different environments, hence reducing the distribution discrepancy between the training and test environment.
Strengths: 1. The paper addresses an important problem of learning long-tailed distribution.
2. The idea introduced based on diverse set of experts using hypernetwork, which can adapt to meet personalized user preferences sounds reasonable.
3. The paper writing seems clear and well written.
4. The extensive experiments demonstrate the proposed method performs better than the baseline.
5. The paper covers theoretical aspects of distribution shifts as well.
Weaknesses: 1. The paper describes the proposed method as an interpretable and controllable long-tail learning method, whereas I don't think a model with the ability to adapt to preference vectors can be considered interpretable. More explanation of this would help me understand why the authors believe the model is interpretable.
2. While the main problem is centered around a preference-based long-tail learning method, theoretical proofs that only address distribution shift do not fit very well. While the proofs look correct (to the best of my knowledge) and I agree that multiple experts can reduce the distribution discrepancy between the training and test environments, why this is useful for long-tailed distributions is not clear to me. Is there any explanation of why all the experts won't still focus on head classes?
3. The Chebyshev polynomial is suddenly introduced in the ablation study; a brief mention of it would be beneficial for readers.
4. There are works which use hypernetworks for MOO [1]; I think the inclusion of such works would be useful.
5. Inclusion of proofs considering the long-tailed distribution would have been more beneficial for this problem setup.
6. Given that this work considers optimizing for the whole Pareto front, mentioning hypervolume values would be useful for future works in this direction.
Typos
Line 36: requirement. [16, 29, 36] -> requirement [16, 29, 36]
[1] Navon, A., Shamsian, A., Fetaya, E. and Chechik, G., Learning the Pareto Front with Hypernetworks. In International Conference on Learning Representations.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The paper describes the proposed method as an interpretable and controllable long-tail learning method, whereas I don't think a model with the ability to adapt to preference vectors can be considered interpretable. More explanation of this would help me understand why the authors believe the model is interpretable.
2. While the main problem is centered around a preference-based long-tail learning method, theoretical proofs that only address distribution shift do not fit very well. While the proofs look correct (to the best of my knowledge) and I agree that multiple experts can reduce the distribution discrepancy between the training and test environments, why this is useful for long-tailed distributions is not clear to me. Is there any explanation of why all the experts won't still focus on head classes?
3. Additional information about how the unknown test class distribution is created would be helpful.
4. How is the preference vector for testing decided?
5. Hypervolume calculation usually needs information on a reference vector; how is that taken into consideration here?
6. While the proposed method learns the entire front in a single model, do the competing methods train multiple models to cover the Pareto front?
7. What are your comments on the scalability of the proposed method for large target networks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 and Q1:** Thank you for your question. The four perspectives in the **public response section** are intended to answer this question. Please refer to the four perspectives in the public response due to space limitations.
**W2 and Q2:** Thank you for your question. The explanation of this part in the original text is indeed brief. It is related to the loss functions that guide different experts. The specific answers have been mentioned in perspectives 2 and 3 of the public response. Here we summarize and will promptly add them to our paper:
- This paper selects three experts with different loss functions: forward expert, uniform expert, and reverse expert. Their different objectives enable different experts to adapt to different class distributions. During training, by sampling weight vectors from the Dirichlet distribution and combining the outputs of the three experts with weighted averaging, the Dirichlet distribution parameters are adjusted to simulate different class distributions. By optimizing the hypernetwork to minimize the expected loss, a set of expert parameters that perform well under different weights is obtained, which is equivalent to minimizing the expected loss of the ensemble model on the test distribution. This adaptive ability enhances the robustness of the model, enabling it to cope with distribution shifts in real-world scenarios.
**W3:** Thank you for your suggestion! In this paper, the Chebyshev (Tchebycheff) scalarization we implement replaces the max function with the log-sum-exp function, which plays a smoothing role (i.e., Section 4.3 of the method). The idea mainly originates from STCH. In the context of long-tailed learning in this paper, its role is to dynamically adjust the importance of experts, thereby better handling the long-tailed distribution problem. We will add the necessary explanations and citations to our paper.
**W4:** Thank you for your comment. We will make proper citations in subsequent versions to facilitate a better understanding of this paper.
**W5:** Your question is excellent. Although the focus of this paper is on solving more problems of real-world distribution shifts based on long-tailed methods, some proofs based on long-tailed settings are indeed still necessary. In addition to the response to W1 and W2, we will supplement the necessary explanations in our paper.
**W6:** To facilitate subsequent work in this direction, we will briefly introduce this concept and include it in subsequent versions. Additionally, we have supplemented some content regarding hypervolume in our answer to your Q6. Please refer to it.
**W7:** We will correct the errors in subsequent versions.
**Q3:** To evaluate the robustness of the algorithm under unknown test class distributions, we refer to LADE and SADE and construct three types of test sets: uniform distribution, positive long-tailed distribution, and negative long-tailed distribution. The positive and negative long-tailed distributions contain multiple different imbalance ratios ρ. For ImageNet-LT, CIFAR100-LT, and Places-LT, the imbalance ratio is set to ρ ∈ {2, 5, 10, 25, 50}. For iNaturalist 2018, since each class has only 3 test samples, the imbalance ratio is adjusted to ρ ∈ {2, 3}. We will supplement the explanation in the revised version.
**Q4:** The decision process for the preference vector during testing is as follows:
- By adjusting the user-specified preference vector, the attention of the model between head and tail classes can be controlled, achieving flexible trade-offs to adapt to different application requirements. The preference vector during testing is calculated based on the pre-trained preference vector and the user-input preference vector. (*Due to space limitations, please refer to the answer to W4 for reviewer VYLm.*)
**Q5:** In this method, the computation of hypervolume is mainly reflected in the following two aspects:
The method guides the computation and utilization of hypervolume by introducing reference vector information in both the training and testing stages.
- During training, by sampling preference vectors from the Dirichlet distribution and using a hypernetwork to generate expert model parameters, sampling and modeling of the Pareto front are achieved.
- During testing, by combining the pre-trained reference preference vector and the user-specified preference vector, the preference vector for testing is obtained, reflecting the localization and utilization of the hypervolume.
This setting of reference vectors provides flexibility, enabling the model to adapt to different task requirements.
**Q6:** Regarding your question, we assume that the "competing methods" refer to some long-tailed learning baselines.
Although some existing competing methods such as LADE and SADE also adopt the idea of multi-expert models, they have considered the trade-offs in long-tailed distributions and used multi-expert structures to some extent, allowing them to cover different regions of the Pareto front to a certain degree. However, the completeness and continuity of the coverage are difficult to guarantee, and controllability cannot be achieved. In contrast, this paper generates a continuous expert space through a hypernetwork, which can more comprehensively approximate and cover the Pareto front.
**Q7:** To evaluate the robustness of the algorithm under unknown test class distributions, we refer to LADE and SADE and construct three types of test sets: uniform distribution, positive long-tailed distribution, and negative long-tailed distribution. The positive and negative long-tailed distributions contain multiple different imbalance ratios ρ. For ImageNet-LT, CIFAR100-LT, and Places-LT, the imbalance ratio is set to ρ ∈ {2, 5, 10, 25, 50}. For iNaturalist 2018, since each class has only 3 test samples, the imbalance ratio is adjusted to ρ ∈ {2, 3}. We will supplement the explanation in the revised version to improve the completeness and readability of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the rebuttal. I have raised the rating to 7.
---
Reply to Comment 1.1.1:
Title: Thanks for your efforts
Comment: We are delighted to know that our response has addressed your concerns and has led to an increase in your assessment score.
Your recognition of the improvements made in the revised manuscript is highly appreciated, and it motivates us to continue refining our work to meet the highest standards of quality and clarity. | Rebuttal 1:
Rebuttal: We sincerely appreciate all the reviewers for their valuable comments. Your feedback has helped us improve the quality of the paper and strengthen the arguments. We are pleased that most reviewers have a positive attitude towards our work :).
We are very grateful to the reviewers for acknowledging **our efforts in addressing important and challenging problems in long-tailed learning (Reviewers LGeA, qXeH, Vosp)**. We are also glad that they recognized **our innovations in theory and methods (Reviewers qXeH, Vosp), our comprehensive experiments (Reviewers LGeA, qXeH, VYLm, Vosp), and clear writing (reviewer LGeA)**. We have made our best efforts to respond to each suggestion, and we are very happy to further communicate if the reviewers have any questions. In addition, to deepen the reviewers' understanding of this paper, we will explain the interpretability and controllability of our method from the following perspectives:
**Perspective 1: Multi-expert structure enables different experts to focus on different classes**
This paper employs the same three experts with different loss functions as SADE: the forward expert tends to adapt to the long-tailed distribution similar to the training set. The uniform expert tends to adapt to the distribution with balanced classes. The reverse expert tends to adapt to the reverse long-tailed distribution with fewer head classes and more tail classes.
Through theoretical analysis, we can obtain the optimal solution for each expert, which corresponds to different class distributions:
- Forward expert:
$v_{1}^{*} (x) = \arg \max_{v_{1}} \mathbb{E}_ {(x,y) \sim p_{train}(x,y)} [ \log p(y|x; v_{1}) ]$
- Uniform expert:
$v_2^*(x) = \arg\max_{v_2} \mathbb{E}_ {(x,y) \sim p_{uniform}(y)p_{train}(x|y)} [\log p(y|x; v_2)]$
- Reverse expert:
$v_3^*(x) = \arg\max_{v_3} \mathbb{E}_ {(x,y) \sim p_{inv}(y)p_{train}(x|y)} [\log p(y|x; v_3)]$
These results demonstrate that the multi-expert structure indeed enables different experts to focus on different classes, providing a foundation for the model to adapt to different distributions.
**Perspective 2: Dirichlet sampling and hypernetwork enable the model to adapt to different class distributions**
During the training process, we sample the weight vector $\boldsymbol{\alpha}=(\alpha_1, \alpha_2, \alpha_3)$ from the Dirichlet distribution $p(\boldsymbol{\alpha}; \boldsymbol{\beta})$, and weight the outputs of the three experts:
$$v(x) = \sum_{i=1}^3 \alpha_i v_i(x)$$
By adjusting the parameters of the Dirichlet distribution $\boldsymbol{\beta}$, we can control the distribution of the sampled weight vector to simulate different class distributions:
- When $\beta_1 > \beta_2 = \beta_3$, simulate the long-tailed distribution.
- When $\beta_1 = \beta_2 = \beta_3$, simulate the uniform distribution.
- When $\beta_1 < \beta_2 = \beta_3$, simulate the reverse long-tailed distribution.
Moreover, we introduce a hypernetwork $h_{\boldsymbol{\phi}}(\boldsymbol{\alpha})$ to map the sampled weight vector to the parameters of the experts:
$$\boldsymbol{\theta}_ {i} = h_{\boldsymbol{\phi}}(\boldsymbol{\alpha})_i, \quad i = 1, 2, 3$$
By optimizing the hypernetwork to minimize the expected loss:
$$\min_{\boldsymbol{\phi}} \mathbb{E}_ {\boldsymbol{\alpha} \sim p(\boldsymbol{\alpha}; \boldsymbol{\beta})} \left[ \frac{1}{n_s} \sum_{(x, y) \in D_s} \ell(y, v(x; h_{\boldsymbol{\phi}}(\boldsymbol{\alpha}))) \right]$$
We can obtain a set of expert parameters that perform well under different weight vectors. From an optimization perspective, this is equivalent to minimizing the expected loss of the ensemble model on the test distribution:
$$\min_{\boldsymbol{\phi}} \mathbb{E}_ {\boldsymbol{\alpha} \sim p(\boldsymbol{\alpha}; \boldsymbol{\beta})} \left[ \mathbb{E}_ {(x, y) \sim p_{test}(x, y)} \left[ \ell(y, v(x; h_{\boldsymbol{\phi}}(\boldsymbol{\alpha}))) \right] \right]$$
This shows that through Dirichlet sampling and hypernetwork optimization, we can obtain an ensemble model that adapts to various class distributions and performs well under different distributions. This adaptive ability enhances the robustness of the model, enabling it to cope with distribution changes in real-world scenarios.
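For intuition, the sampling-and-weighting step above can be sketched in a few lines of numpy. This is a minimal illustration with hypothetical expert outputs and shapes, not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_logits(expert_logits, beta, rng):
    # Sample a weight vector alpha from Dirichlet(beta); it sums to 1.
    alpha = rng.dirichlet(beta)
    # Weighted combination v(x) = sum_i alpha_i * v_i(x).
    return np.tensordot(alpha, expert_logits, axes=1), alpha

# Hypothetical logits of the three experts for a batch of 4 samples, 5 classes.
experts = rng.normal(size=(3, 4, 5))

# beta_1 > beta_2 = beta_3 biases alpha toward the forward (long-tailed) expert.
v_lt, a_lt = ensemble_logits(experts, np.array([5.0, 1.0, 1.0]), rng)
# beta_1 = beta_2 = beta_3 simulates the uniform distribution.
v_uni, a_uni = ensemble_logits(experts, np.array([1.0, 1.0, 1.0]), rng)
```

In the full method the sampled alpha is additionally fed to the hypernetwork to produce the expert parameters; here it only weights fixed expert outputs.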
**Perspective 3: Learning the entire frontier through Pareto optimization**
When optimizing the hypernetwork, we are actually learning the entire Pareto frontier. This is because we optimize the expected loss of the ensemble model under the weight vector sampled from the Dirichlet distribution. Different weight vectors correspond to different points on the frontier, representing different class distributions. By minimizing the expected loss, we are essentially balancing all these distributions and learning the entire Pareto frontier.
Theoretically, if a solution $\boldsymbol{\theta}(\boldsymbol{\alpha})$ is a local Pareto optimal solution to the expected loss optimization problem, then in a neighborhood of $\boldsymbol{\alpha}$, we can find a smooth mapping $\boldsymbol{\theta}(\boldsymbol{\alpha})$ that is Pareto optimal in the entire neighborhood. This result supports that our method can learn a set of continuous Pareto optimal solutions covering the entire Pareto frontier, providing flexibility to adapt to different practical needs.
**Perspective 4: Advantages over previous methods**
Compared with previous long-tailed learning methods, our method has several advantages in interpretability and controllability:
- Through the multi-expert structure and hypernetwork, our method can adapt to different class distributions.
- By learning the entire Pareto frontier, our method can control the behavior of the model by adjusting the weight vector to achieve dynamic adaptation to different class distributions. This controllability is not available in previous methods.
- We provide some relatively rigorous and coherent theoretical analyses to support the effectiveness of the method and enhance the interpretability of the model.
Pdf: /pdf/fef6d3bd7f057f3d1fb24a9a517ec77279ed54d6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards a Unified Framework of Clustering-based Anomaly Detection | Reject | Summary: The paper addresses unsupervised anomaly detection by proposing a method named UniCAD. The authors aim to enhance anomaly detection performance by establishing a theoretical connection between representation learning, clustering, and anomaly detection. They introduce a unified framework that jointly optimizes these three components, using a probabilistic mixture model and a Student's-t distribution for robust representation learning and clustering. The framework also includes an anomaly-aware data likelihood objective, which reduces the impact of anomalous data on the learning process. Additionally, the authors propose a gravity-inspired anomaly scoring method that leverages relationships between samples and clusters.
Strengths: 1. Modeling the connection between representation learning, clustering, and anomaly detection is highly relevant. This paper effectively demonstrates how these three tasks are interrelated and can be jointly optimized to improve anomaly detection performance.
2. The paper is well-written, presenting its hypothesis and method clearly.
3. The results are impressive and demonstrate the effectiveness of UniCAD.
Weaknesses: 1. The ablation study on the hyperparameters $k$ and $l$ is insufficient. The authors only present results from a single dataset, satimage-2, where their method achieves an almost perfect score. It would be more informative to perform ablation studies across all 30 datasets or at least a subset where the model also shows lower performance. This broader analysis would demonstrate how these hyperparameters affect the average ranking of the method, similar to the results reported in the paper's table.
2. The authors introduce a $g(\Theta)$ term to prevent shortcut solutions, mentioning it in Equation 15. However, they do not discuss its importance or impact on performance after its introduction. Key questions remain unanswered, such as how the $g(\Theta)$ term affects the model's performance, what happens if it is removed, and how the autoencoder is implemented. These details are crucial, as the regularization term may significantly influence the results.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors mention some of the limitations, but they do not address the potential negative impact of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The ablation study on the hyperparameters 𝑘 and 𝑙 is insufficient. The authors only present results from a single dataset, satimage-2, where their method achieves an almost perfect score. It would be more informative to perform ablation studies across all 30 datasets or at least a subset where the model also shows lower performance. This broader analysis would demonstrate how these hyperparameters affect the average ranking of the method, similar to the results reported in the paper's table.
Thank you for your valuable feedback. Regarding the ablation study on the hyperparameters (k) and (l), we have tested various values for these parameters and found that the method exhibits a degree of robustness to their specific settings within a reasonable range. As stated in L243, our method utilizes a fixed set of parameters (k=10, l=1%) to ensure a fair comparison. We found that for most datasets, the method performs well with these settings.
Specifically, we conducted a grid search over the following coarse-grained parameter space and compared them across 30 datasets against 17 baseline methods. Indeed, for specific datasets, further tuning of hyperparameters can enhance performance. The detailed results, which represent the average ranking of the method in terms of AUC-ROC, are as follows:
| l\k | 10 | 20 | 30 | 40 |
| ---- | ---- | ---- | ---- | ---- |
| 0.01 | 3.34 | 4.31 | 4.69 | 4.71 |
| 0.05 | 4.44 | 4.23 | 4.65 | 4.88 |
| 0.10 | 4.27 | 4.46 | 4.48 | 4.88 |
Additionally, we have explored providing guidelines for selecting these hyperparameters. While methods like the elbow method and silhouette coefficient were considered to find the optimal cluster number, they proved time-consuming and not strongly correlated with anomaly detection performance. Instead, an ensemble learning approach, involving random searches of (k) values and aggregating anomaly scores, improved performance on certain datasets and model robustness. We plan to continue this research in future studies.
> The authors introduce a 𝑔(Θ) term to prevent shortcut solutions, mentioning it in Equation 15. However, they do not discuss its importance or impact on performance after its introduction. Key questions remain unanswered, such as how the 𝑔(Θ) term affects the model's performance, what happens if it is removed, and how the autoencoder is implemented. These details are crucial, as the regularization term may significantly influence the results.
Thank you for your valuable feedback regarding the introduction of the term $ g(\Theta) $ in our model. We appreciate your insights and would like to address your concerns as follows:
1. Importance and Impact of g(Θ)
The constraint term $ g(\Theta) $ is indeed a fundamental and important part of the model. Using only $ J(\Theta, \Phi) $ can lead to shortcut solutions, causing the loss to quickly become infinite, thereby preventing effective optimization of the deep network parameters. As shown in the ablation experiments in Table 3, we aim to demonstrate that having only $ g(\Theta) $ or only $ J(\Theta, \Phi) $ is insufficient; the combination of $ g(\Theta) + J(\Theta, \Phi) $ significantly enhances the model's performance.
2. Implementation of the autoencoder
Our autoencoder consists of two main components: the encoder and the decoder. The encoder compresses the input data into a lower-dimensional representation, and the decoder then maps this compressed representation back to the original data space. Both the encoder and decoder are implemented as fully connected neural networks with ReLU activation functions. The autoencoder is trained using a mean squared error loss function, which measures the difference between the original input and the reconstructed output.
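As a rough illustration of this architecture, the forward pass can be sketched as follows (hypothetical dimensions, a single hidden layer per side, and no training loop):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 16, 4                       # hypothetical input / latent dimensions

# Fully connected encoder and decoder weights, as described in the rebuttal.
W_enc = rng.normal(scale=0.1, size=(D, d)); b_enc = np.zeros(d)
W_dec = rng.normal(scale=0.1, size=(d, D)); b_dec = np.zeros(D)

relu = lambda a: np.maximum(a, 0.0)

def autoencode(x):
    z = relu(x @ W_enc + b_enc)    # compress to the representation space
    x_hat = z @ W_dec + b_dec      # reconstruct in the original data space
    return z, x_hat

x = rng.normal(size=(8, D))
z, x_hat = autoencode(x)
mse = np.mean((x - x_hat) ** 2)    # reconstruction loss used as regularization
```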
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing my concerns. I have read the author's responses and all other reviews, and I am keeping my rating the same.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for reading our rebuttal!
We are very pleased that your concerns have been addressed. We will update the content mentioned in our rebuttal in the camera-ready version.
Thank you again for your valuable suggestions! | Summary: This paper proposes UniCAD, a theoretically unified framework for representation learning, clustering, and anomaly detection. This paper first introduces the mixture of Student-t distribution $p(x|\Theta, \Phi)$ with degree of freedom $\nu=1$ based on a representation learner $f_\Theta$ using NN. Then, this paper combines with an anomaly indicator $\delta$ for maximum likelihood estimation. Parameters $(\Theta, \Phi)$ are optimized by EM algorithm and SGD. In addition, when detecting anomalies, an improved score is used with reference to gravity. The UniCAD achieved good performance on experiments with various datasets.
Strengths: - This paper is well written and easy to follow.
- Good experimental results.
Weaknesses: - We have several questions about the proposed method and experiments. Please see Qustions.
- The comparison with DeepSVDD and DIF is excellent, but I think the paper also needs to be compared with other Deep anomaly detection methods. For example, DROCC [1].
[1] Goyal, Sachin, et al. "DROCC: Deep robust one-class classification. "International conference on machine learning. PMLR, 2020.
Technical Quality: 2
Clarity: 3
Questions for Authors: - if the degrees of freedom is fixed to 1, the benefit of the t-distribution disappears. Why not learn the degrees of freedom as well?
- I think $\tilde{F}_{ik}$ is a scalar and the norm of $\tilde{r}_{ik}$ is 1, so I don't see the difference between Eq. (9) and Eq. (8).
- As shown in Figure 2, I think the proposed method strongly depends on the hyperparameters $l$ and $k$. Is there any criteria for setting these values?
- The method in this paper seems incremental. (Especially until 3.1.) I think the main novelty is the gravity-based anomaly score described in 3.2. To what extent does this anomaly score improve performance compared to the regular anomaly score? Also, can you give a theoretical explanation?
If the above concerns are remedied and a comparison is made between SOTA's Deep Anomaly Detection methodology, I intend to raise the score.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: - Hyper-parameter sensitivity seems to be one limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The comparison with DeepSVDD and DIF is excellent, but I think the paper also needs to be compared with other Deep anomaly detection methods. For example, DROCC \[1\].\[1\] Goyal, Sachin, et al. "DROCC: Deep robust one-class classification. "International conference on machine learning. PMLR, 2020.
We are grateful for your suggestion to include comparisons with other deep anomaly detection methods, particularly DROCC.
- Although our initial classification included only two NN-based methods, several other methods in different categories also employ representation learning, such as DAGMM and DCFOD. We will revise this classification to avoid any potential misunderstandings.
- We have included an **additional NN-based comparison method (DROCC)** in our experiments. As shown in our global response, our model significantly outperforms it. However, we appreciate your recommendation and will include and compare this method in the revised version.
- Furthermore, as shown in Appendix E, in the experiments on graph data, we compared our method with **13 deep anomaly detection approaches** based on representation learning.
> if the degrees of freedom is fixed to 1, the benefit of the t-distribution disappears. Why not learn the degrees of freedom as well?
Thank you for your insightful comment regarding the fixed degrees of freedom in our model.
Inspired by existing works \[1, 2\], cross-validating ν on a validation set or learning it is optional, especially in the unsupervised setting. In our work, we chose to fix ν = 1 to simplify the model. This decision reduces complexity and computational overhead while still maintaining robust performance.
| Metric | learn v | fix v=1 |
| --------------- | ------- | ------- |
| AUCROC Avg.rank | 4.4 | 3.34 |
| AUCPR Avg.rank | 5.05 | 4.47 |
> I think $\tilde{F}\_{ik}$ is a scalar and the norm of $\tilde{r}\_{ik}$ is 1, so I don't see the difference between Eq. (9) and Eq. (8).
We appreciate your attention to detail regarding the mathematical formulation. To clarify, Eq. (8) represents a scalar sum, which is a constant addition, while Eq. (9) involves a vector sum where we take the norm of the vector.
The key difference lies in **whether the direction of the sample and the cluster in the representation space is considered**. Clusters in different directions are considered contradictory when estimating the anomaly degree of a sample and will cancel each other out. In Appendix C.1, we also provide an example for intuitive understanding.
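The cancellation effect can be shown with a small numpy example (hypothetical 2-D centroids and equal magnitudes, analogous to the toy example in Appendix C.1): a sample midway between two opposing clusters yields the same scalar sum as a well-clustered sample, but a near-zero vector sum.

```python
import numpy as np

z = np.zeros(2)                                  # sample midway between clusters
mu = np.array([[-3.0, 0.0], [3.0, 0.0]])         # two cluster centroids
F = np.array([0.5, 0.5])                         # equal "force" magnitudes

# Unit direction vectors r_ik from the sample toward each centroid.
r = (mu - z) / np.linalg.norm(mu - z, axis=1, keepdims=True)

scalar_sum = F.sum()                             # Eq. (8)-style: directions ignored
vector_sum = np.linalg.norm((F[:, None] * r).sum(axis=0))  # Eq. (9)-style
```

Here `scalar_sum` is 1.0 while `vector_sum` is 0.0: the opposite directions cancel, so the vector formulation assigns such a sample a much higher anomaly degree.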
> As shown in Figure 2, I think the proposed method strongly depends on the hyperparameters $l$ and $k$. Is there any criteria for setting these values?
Thank you for your suggestion.
In our experiments, we have tested various values for these parameters and found that **the method exhibits a degree of insensitivity to their specific settings within a reasonable range**. As stated in L243, our method utilizes a fixed set of parameters to ensure a fair comparison (k=10, l=1%). We found that **for most datasets, the method performs well with these settings.**
We have also explored several criteria for selecting these hyperparameters, including the elbow method, silhouette coefficient, and ensemble of random search. We found that they are time-consuming and **not strongly correlated** with anomaly detection performance.
In the future, we will further explore strategies such as Outlier Detection Thresholding \[3\] in future studies.
> The method in this paper seems incremental. (Especially until 3.1.) I think the main novelty is the gravity-based anomaly score described in 3.2. To what extent does this anomaly score improve performance compared to the regular anomaly score? Also, can you give a theoretical explanation?
We appreciate your observation regarding the novelty of the gravity-based anomaly score.
The gravity-based anomaly score represents a significant advancement over traditional anomaly scores due to its ability to **leverage the complex relationships among samples and clusters**. Unlike conventional scores, which often rely on heuristic designs, our approach is **grounded in a theoretical framework** that connects clustering and anomaly detection through posterior probabilities and likelihood estimations. This theoretical underpinning allows our score to more effectively capture the nuances of data distributions, leading to improved anomaly detection performance.
In our comparative experiments presented in Table 1, we discuss the impact of different scoring methods. The results indicate that our gravity-based anomaly score ranks higher on average compared to traditional scores across 30 datasets, demonstrating its **versatility and effectiveness** in diverse scenarios.
Additionally, we provide an intuitive explanation of the gravity-based anomaly score in Appendix C.1, where we illustrate its advantages through a toy example. This example highlights how our score can better identify group anomalies, which are often challenging for traditional methods to detect.
**References:**
\[1\] Laurens Van Der Maaten. Learning a parametric embedding by preserving local structure. In Artificial intelligence and statistics, pages 384–391. PMLR, 2009.
\[2\] Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In International conference on machine learning, pages 478–487. PMLR, 2016.
\[3\] Perini L, Bürkner P C, Klami A. Estimating the contamination factor’s distribution in unsupervised anomaly detection\[C\]//International Conference on Machine Learning. PMLR, 2023: 27668-27679.
---
Rebuttal 2:
Title: Thanks for the rebuttal.
Comment: My concerns have been addressed to some extent, especially with the additional experiments involving DROCC. While the method itself appears to be incremental, its performance on tabular data is excellent. I will raise my score. | Summary: This paper introduces a novel probabilistic mixture model for unsupervised anomaly detection (UAD) that unifies representation learning, clustering, and anomaly detection into a single theoretical framework. The proposed UniCAD model addresses the lack of a unified approach in existing methods, which often consider these components separately or in pairs. The experimental results show that UniCAD consistently outperformed other methods in terms of AUC-ROC and AUC-PR. The model’s iterative optimization process using EM was also highlighted as effective and convergent.
Strengths: - This paper introduces a novel integration of a probabilistic mixture model that unifies representation learning, clustering, and anomaly detection into a single theoretical framework.
- The proposed approach is well-motivated (Fig. 1) and supported by a robust theoretical foundation that maximizes anomaly-aware data likelihood, ensuring the model effectively leverages the interplay between representation learning, clustering, and anomaly detection.
- The paper is well-written, offering clear and comprehensive explanations of the proposed method, including detailed theoretical derivations and intuitive motivations for the design choices. The methodology section is particularly well-structured, logically outlining the steps and equations involved in the proposed model.
- The comprehensive evaluation design underscores the robustness of the proposed method.
Weaknesses: - The connection between force analysis and anomaly detection, particularly between Equations 7 and 8 in Section 3.2.1, could benefit from further justification. While the analogy is interesting, it may not be immediately intuitive for all readers.
- The iterative optimization process may pose scalability issues for large datasets. An in-depth analysis and discussion of this would further strengthen the quality of this research.
- Although the model maps data to a low-dimensional representation space, the effectiveness of this mapping for very high-dimensional datasets could be explored further.
Technical Quality: 4
Clarity: 4
Questions for Authors: I have no specific questions.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback. We will address each of the weaknesses and suggestions you mentioned.
> The connection between force analysis and anomaly detection, particularly between Equations 7 and 8 in Section 3.2.1, could benefit from further justification. While the analogy is interesting, it may not be immediately intuitive for all readers.
Thank you for your suggestion! We share similar concerns and, due to space limitations, we have provided **an intuitive explanation** of the advantages of this analogy in Appendix C, along with **a toy example** to illustrate it more vividly. In this example, we offer a detailed explanation of the differences between scalar and vector concepts.
> The iterative optimization process may pose scalability issues for large datasets. An in-depth analysis and discussion of this would further strengthen the quality of this research.
Thank you for your valuable feedback regarding the iterative optimization process and its potential scalability issues for large datasets. We have analyzed the computational complexity in **Appendix D.4.** The time complexity for t iterations is **O(tN(logN + Td(D + K)))**. According to our complexity and run-time analysis, our method is scalable to large datasets.
In the future, we will explore two techniques that can help reduce the computational burden when processing large datasets: 1. Training on multiple manageable-sized data subsets and combining their scores using ensemble methods. 2. The Mini-Batch EM algorithm, which uses only a small batch of the dataset in each iteration.
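The second technique can be sketched roughly as follows. This is an illustrative Mini-Batch E-step with a heavy-tailed kernel; the kernel, shapes, and batch size are assumptions, not the paper's exact density:

```python
import numpy as np

rng = np.random.default_rng(0)

def e_step(batch, mu, pi):
    """Responsibilities p(c_k|x_i) for one mini-batch (t-style kernel, nu = 1)."""
    d2 = ((batch[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # squared distances
    num = pi * (1.0 / (1.0 + d2))                             # pi_k * p(x_i|c_k)
    return num / num.sum(axis=1, keepdims=True)

X = rng.normal(size=(1000, 2))                 # hypothetical dataset
mu = rng.normal(size=(3, 2))                   # 3 cluster centers
pi = np.full(3, 1.0 / 3.0)

idx = rng.choice(len(X), size=64, replace=False)  # one small batch per iteration
gamma = e_step(X[idx], mu, pi)                 # only O(batch) work per E-step
```

The subsequent M-step would then update `pi` and `mu` from `gamma` on the same batch, amortizing the cost of a full pass over large datasets.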
> Although the model maps data to a low-dimensional representation space, the effectiveness of this mapping for very high-dimensional datasets could be explored further.
Thank you for your suggestion. We agree that this is an important consideration. In fact, we have addressed this issue from two perspectives:
- On one hand, we can directly map high-dimensional data to a lower dimension using deep neural networks, which helps alleviate the problem of high dimensionality.
- Additionally, as shown in Appendix E, we can also easily extend to higher-dimensional graph data, such as the Weibo dataset with 400 feature dimensions, by simply replacing the corresponding backbone, demonstrating competitive performance.
---
Rebuttal Comment 1.1:
Title: Response from Reviewer BFMu
Comment: I appreciate the effort the authors put into preparing the rebuttal. I have no further comments and will be keeping my rating as it is. | Summary: The authors propose UniCAD to jointly model representation learning, clustering, and anomaly detection. The main objective is maximizing the product of an anomaly indicator (1 is normal, 0 is anomaly) and the joint probability of instance x_i and cluster c_k, given parameters for representation learning theta and clustering phi. The joint probability is decomposed into the prior of c_k and the likelihood p(x_i|c_k), which is modeled by a Student's-t distribution on the distance between representation z_i and mean mu_k with covariance Sigma_k. p(x_i) is the marginal over c_k. The anomaly indicator delta is zero for p(x_i) in the lowest l percent. The anomaly score is 1/p(x_i).
Compared to Newton's law of universal gravitation, the anomaly score function has similar components, except for the unit vector r_ik (which indicates the directions of forces, beyond the magnitudes). Hence, they incorporated r_ik into their anomaly score function.
For updating the clustering parameters phi (mixture weights, means, covariances), they use EM. In the E-step, they estimate the posterior p(c_k|x_i). In the M-step, they estimate phi. For updating the representation parameters theta, they use gradient descent to minimize the negative log likelihood of instances, together with a reconstruction loss via an autoencoder to prevent shortcut solutions.
For empirical comparisons, they use 30 tabular datasets and 17 existing algorithms. The proposed approach generally outperforms the others in terms of average rank in AUC-ROC. The vector version of the anomaly score function is ranked higher than the scalar version. On computation time, UniCAD is in the middle among 5 algorithms. Ablation studies indicate the contributions of the different components.
Strengths: The main contribution is combining representation learning, clustering, and anomaly detection in a unified single probabilistic formulation, which is interesting.
The empirical results indicate that UniCAD compares favorably against 17 existing techniques on 30 tabular datasets. Compared to four existing algorithms, computation is not the most intensive.
The paper is generally well written.
Weaknesses: The clustering part is similar to a typical Gaussian mixture model for clustering via EM, except for t-distribution instead of Gaussian and the scaling factor.
Two neural-network-based approaches were compared. As UniCAD utilizes representation learning, comparing with more approaches that utilize representation learning would be significant. Approaches without representation learning have an inherent disadvantage.
Some parts could be clarified; see Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: Eq 11: What is the motivation for the scale factor u_ik, used in Eq 13 and 14?
How is K, number of clusters, determined?
Since the method has representation learning, how does the method prevent the trivial solution of having most/all instances in one cluster?
Sec 3.2.1: Consider (a simple case of) only two clusters with the same covariance and "mass", but different centroids. If an instance is in the middle between the two clusters, the two r_ik vectors will be in opposite directions, resulting in an anomaly score of zero. In gravitational forces, the two opposite forces cancel out. However, in anomaly detection, that might not be desirable, particularly when the two clusters are far away. Any insights?
What is the "schedule" for updating phi and theta? Updating one to
convergence before updating the other to convergence?
If one "round" is updating phi to convergence and then theta to
convergence? Are there multiple rounds? If so, what is the overall
stopping criterion?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations of the proposed approach do not seem to be mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewers' valuable suggestions for our work. We hope the following responses will clarify any doubts and enhance the quality of our paper.
> The clustering part is similar to a typical Gaussian mixture model for clustering via EM, except for t-distribution instead of Gaussian and the scaling factor.
We acknowledge that the clustering part seems similar to a typical Gaussian Mixture Model (GMM). However, the primary reason we use the Student's t-distribution instead of the Gaussian distribution is **its robustness to outliers**. As mentioned in Section 3.1.1, the introduction of the Student's t-distribution allows the model to perform better when dealing with high-variance data, especially in the presence of outliers. Our ablation study (Table 3) also demonstrates that using the Student's t Mixture Model (SMM) **significantly improves the average performance** of the method compared to GMM.
> Two neural-network-based approaches were compared. As UniCAD utilizes representation learning, comparing with more approaches that utilize representation learning would be significant. Approaches without representation learning have an inherent disadvantage.
Thank you for your valuable suggestion. We also place great importance on comparing our method with various anomaly detection approaches that utilize representation learning.
- We have included **an additional NN-based comparison method** (DROCC) in our experiments. As shown in our general response, our model significantly outperforms it.
- Furthermore, on graph data, we compared our method with **13 deep anomaly detection approaches** based on representation learning.
> Eq 11: What is the motivation for the scale factor u\_ik, used in Eq 13 and 14?
Thank you for your insightful question.
The primary motivation for introducing the scale factor $ u_{ik} $ is to **downweight the influence of outliers in the data when estimating the parameters of the mixture model**. This approach is also grounded in the optimization derivation presented in the paper \[1\].
In the M-step of the EM algorithm, the estimates for the component means $ \mu_i^{(k+1)} $ and covariance matrices $ \Sigma_i^{(k+1)} $ are computed using weighted averages, where the weights are given by $ u_{ij}^{(k)} $. This allows the model to give less importance to observations that are likely to be outliers, thereby improving the robustness of the parameter estimates.
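The weighted M-step described above can be sketched in a few lines of Python. The function name and interface are illustrative, following the standard robust t-mixture update from \[1\] rather than the authors' exact implementation:

```python
import numpy as np

def m_step_component(X, resp, u):
    """Robust M-step update for one t-mixture component (illustrative sketch).

    X:    (n, d) data matrix
    resp: (n,) responsibilities for this component
    u:    (n,) scale factors u_ik; a small u_ik downweights a likely outlier
    """
    w = resp * u                                  # combined per-sample weights
    mu = (w[:, None] * X).sum(0) / w.sum()        # weighted mean
    diff = X - mu
    # Weighted covariance; the denominator uses the responsibilities alone,
    # as in the standard t-mixture EM update.
    sigma = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(0) / resp.sum()
    return mu, sigma
```

With a single extreme outlier assigned a small scale factor, the estimated mean stays near the bulk of the data, whereas an unweighted average is dragged toward the outlier.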
> How is K, number of clusters, determined?
The selection of the number of clusters, in the absence of prior knowledge, is indeed a topic worthy of in-depth discussion. However, according to our hyperparameter analysis, we found that **K=10 generally adapts well to most evaluated datasets.**
Furthermore, we considered search strategies such as the elbow method, the silhouette coefficient\[2\], and ensembles of random searches in our work, but we found that they are **time-consuming and not strongly correlated with anomaly detection performance**.
We will explore this active direction in future studies.
> Since the method has representation learning, how does the method prevent the trivial solution of having most/all instances in one cluster?
Thank you for highlighting this critical question in deep clustering. Indeed, we have carefully designed the model to avoid this problem.
The key to our model's ability to avoid this issue lies in its **unified optimization objectives** for representation learning and clustering. Using the maximum likelihood of a mixture model as the objective inherently prevents all samples from being assigned to a single cluster, as this would significantly reduce the overall likelihood. Optimizing the maximum likelihood objective leads to **better results with a mixture model compared to a single cluster**. Consequently, the model will naturally distribute samples across multiple clusters.
> Sec 3.2.1: ... in anomaly detection, that might not be desirable, particularly, when the two clusters are far away. Any insights?
Thank you for raising this insightful question. In fact, the case you mentioned is precisely what we consider to be **the most anomalous situation**. Since our anomaly score is **inversely proportional** to the magnitude of the **resultant force**, the anomaly score is maximized when the resultant force is minimal. As a result, the instance in the situation you described will be recognized as an anomaly with the highest score.
Furthermore, the toy example in Figure 3 illustrates a similar case, showing that the model can effectively detect anomalies situated at the intersection of multiple clusters.
> What is the "schedule" for updating phi and theta? Updating one to convergence before updating the other to convergence?If one "round" is updating phi to convergence and then theta to convergence? Are there multiple rounds? If so, what is the overall stopping criterion?
We appreciate your insightful comments. In practice, we iteratively optimize $\Phi$ and $\Theta$ using tolerance λ and iterations t. Due to space constraints in the main text, more detailed optimization procedures can be found in Algorithm 1 in Appendix A.
- For the parameters $\Phi$ of the mixture model, we adopt a log-likelihood convergence strategy. The algorithm stops when the increase in log-likelihood between iterations falls below a predefined threshold.
- For the parameters $\Theta$ of the deep model, we use a maximum iterations strategy. The algorithm stops after a predefined number of iterations is reached.
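The two stopping rules above can be sketched as a small driver loop. The structure and names are illustrative, not Algorithm 1 verbatim; `em_step_loglik` and `grad_step` stand in for the actual Phi and Theta updates:

```python
def fit(em_step_loglik, grad_step, rounds=5, tol=1e-4, theta_iters=50):
    """Alternating optimization schedule (illustrative sketch).

    em_step_loglik(): performs one EM step on Phi and returns the log-likelihood.
    grad_step():      performs one gradient step on Theta.
    """
    for _ in range(rounds):
        # Phi: run EM until the log-likelihood gain falls below tol
        prev = em_step_loglik()
        while True:
            cur = em_step_loglik()
            if cur - prev < tol:
                break
            prev = cur
        # Theta: run a fixed number of gradient iterations
        for _ in range(theta_iters):
            grad_step()
```

Each outer round first drives the mixture parameters to (approximate) convergence, then advances the deep model for a fixed budget, matching the two strategies described above.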
**References:**
\[1\] David Peel and Geoffrey J McLachlan. Robust mixture modelling using the t distribution. Statistics and computing, 10:339–348, 2000.
\[2\] Shi C, Wei B, Wei S, et al. A quantitative discriminant method of elbow point for the optimal number of clusters in clustering algorithm\[J\]. EURASIP journal on wireless communications and networking, 2021, 2021: 1-16.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response
> We have included an additional NN-based comparison method (DROCC) in our experiments. As shown in our general response, our model significantly outperforms it.
To include more comparisons with methods with representation learning, I suggest reducing the ones without representation learning.
> Furthermore, on graph data, we compared our method with 13 deep anomaly detection approaches based on representation learning.
Where is that?
> Optimizing the maximum likelihood objective leads to better results with a mixture model compared to a single cluster.
What is the main reasoning?
> we consider to be the most anomalous situation. Since our anomaly score is inversely proportional to the resultant force.
In my simple example, why is an instance between two centroids the most anomalous situation? Wouldn't an instance very far away from the two centroids be more anomalous?
---
Reply to Comment 1.1.1:
Comment: > To include more comparisons with methods with representation learning, I suggest reducing the ones without representation learning.
Thank you for your suggestions. We will incorporate them into the revised version.
> Where is that?
We apologize for not clearly stating the location, these results are placed in **Appendix E**.
> What is the main reasoning?
Thanks for your further question. The main reason is that the whole dataset consists of several clusters, so the mixture-model assumption fits the dataset distribution better than a single component. This is also supported by research on GMMs\[1\]. According to the approximation theorem, a mixture model **can approximate any continuous probability density function by increasing the number of components**. If all samples were assigned to a single cluster, there **would still be room for improvement in the maximum-likelihood solution**.
As a result, maximizing the likelihood of the mixture model helps improve clustering performance while avoiding the trivial solution of assigning all samples to a single cluster.
> In my simple example, why is an instance between two centroids the most anomalous situation. Wouldn't an instance very far away from the two centroids be more anomalous?
We apologize for the misunderstanding caused by the use of "most" in the previous response. In our newly proposed anomaly score, **both situations you described—an instance between two centroids and an instance far away from the centroids—would be assigned a high anomaly score**.
In the example you provided, due to the opposing effects of the two centroids, the resultant force $\vec{{\mathbf{F}}}\_{i}$ is smaller, resulting in a higher anomaly score ${o}\_i^V$, meaning that samples with ambiguous category semantics are more likely to be identified as anomalies. Conversely, the probability-based anomaly score fails to detect this type of anomaly, making it less adaptable to a broader range of data.
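The force-style score discussed in this exchange can be illustrated with a short sketch. The specific formula below (unit masses, inverse-square attraction) is an assumption for illustration, not the paper's exact equation:

```python
import numpy as np

def force_anomaly_score(x, centroids, masses, eps=1e-8):
    """Gravitational-style anomaly score (illustrative, not the paper's Eq.).

    Each cluster k pulls x toward its centroid with magnitude
    masses[k] / ||mu_k - x||^2.  The score is the inverse of the
    resultant-force norm, so both an instance caught between clusters
    (forces cancel) and one far from all clusters (forces vanish)
    receive high scores.
    """
    F = np.zeros_like(x, dtype=float)
    for mu, m in zip(centroids, masses):
        d = mu - x
        r = np.linalg.norm(d) + eps
        F += m * d / r**3                 # unit direction scaled by m / r^2
    return 1.0 / (np.linalg.norm(F) + eps)
```

With two equal-mass clusters at (-5, 0) and (5, 0), the midpoint and a faraway point both score higher than a point close to one centroid, matching the behavior described in the reply.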
**References:**
\[1\] Reynolds, Douglas A. "Gaussian mixture models." Encyclopedia of biometrics 741.659-663 (2009). | Rebuttal 1:
Rebuttal: We sincerely appreciate the positive feedback from most reviewers on our paper, as well as the very useful suggestions from different aspects for further improving the quality of our work.
In the rebuttal, we have carefully read the reviews and provided corresponding answers in each individual reply.
We greatly value and appreciate the valuable questions of the reviewers, and we hope to make full use of the discussion phase to engage in in-depth discussions with the reviewers. Therefore, if there are any further suggestions and questions, we sincerely hope the reviewers can bring them up. We will also do our utmost to discuss and further improve our work.
Pdf: /pdf/d1c3489aad88e353614c01ee0fff646b68b0ec38.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Interpret Your Decision: Logical Reasoning Regularization for Generalization in Visual Classification | Accept (spotlight) | Summary: The paper introduces a logical regularization method called L-Reg, aimed at enhancing the generalization ability of image classification tasks through a logical reasoning framework. L-Reg effectively simplifies the complexity of the model by ensuring that the generated atomic formulas align with the logical relationships between the images and labels, promoting balanced distribution in the feature space, and reducing the number of extreme weight values in the classifier. Both theoretical analysis and experimental results validate the effectiveness of L-Reg in various generalization scenarios, especially demonstrating outstanding performance in multi-domain generalization and generalized category discovery tasks.
Strengths: 1. L-Reg effectively reduces the complexity of the model by balancing the feature distribution and reducing the number of extreme weight values in the classifier.
2. In Sections 3.1 and 3.2, the paper provides logic-based theoretical analysis and detailed derivation of the construction process of L-Reg. Furthermore, through experiments in Sections 4 and 5, the effectiveness of L-Reg in different generalization settings, especially in multi-domain generalization and generalized category discovery tasks, is validated.
3. Designed as a plug-in loss function, L-Reg is compatible with most existing frameworks, making it highly flexible and practical in real-world applications.
4. The paper also explores the relationship between logical reasoning and visual classification tasks, delving into the derivation of logic-based regularization terms to promote generalization, providing new perspectives and methods for research in related fields.
Weaknesses: 1. L-Reg may reduce the scope of semantic support, leading to a slight performance decrease on known datasets. It is hoped that the authors can analyze in more detail the reasons for this performance drop and provide possible improvement methods.
2. The appropriate function g for generating atomic formulas is a key factor. While the authors propose L-Reg as a regularization method to ensure that F(g(Xs), Ys) consists of atomic formulas, they do not elaborate on how to choose or design this function g. Further explanation on the specific methods or criteria for selecting g is desired.
3. In multi-domain generalization tasks, how L-Reg integrates with existing methods and how to adjust alpha to balance the two losses are details that the authors are encouraged to further elucidate.
Technical Quality: 3
Clarity: 3
Questions for Authors: I hope the authors can further compare L-Reg with other regularization methods in terms of generalization ability and interpretability to highlight the advantages of L-Reg.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your insightful comments, and we address your weaknesses and questions point-by-point.
**W1.** Thank you for your insightful comment. As discussed in Paper Lines 344-358, L-Reg relies on the precondition that each dimension of the semantic features represents independent semantics. When this condition is not met, applying L-Reg can lead to performance degradation. This is because the atomic formulas constructed for different classes could become sub-optimal if the minimal semantic supports for different classes correlate with each other. In cases where improper $z$ is used, especially between known and unknown classes, the model may struggle to effectively filter out irrelevant features for unknown classes, which can result in features that inadvertently overlap with the minimal semantic supports for known classes, resulting in degradation for them.
To address this issue, we hypothesize that enforcing independence between $z^i$ and $z^j$ could lead to further improvements. To test this hypothesis, we conducted experiments using orthogonality regularization (Ortho-Reg) to enforce feature independence in mDG and GCD tasks. As shown in PDF Tab.2,3&4, results indicate that while Ortho-Reg alone may not be very effective, combining L-Reg with Ortho-Reg leads to significant improvements.
Based on these findings, we propose that the performance drop observed could be mitigated further with a well-designed model architecture or additional regularization techniques that enhance independence between $z^i$ and $z^j$. We find this to be a very attractive and promising area of research, and we are eager to explore it further in future work.
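One plausible form of such an orthogonality penalty, sketched here as an assumption rather than the authors' implementation, penalizes off-diagonal correlations between feature dimensions:

```python
import numpy as np

def ortho_reg(z):
    """Orthogonality regularization sketch (assumed form).

    Penalizes off-diagonal entries of the normalized feature Gram matrix
    so that different feature dimensions z^i, z^j stay decorrelated.
    z: (batch, d) feature matrix.
    """
    z = z - z.mean(0, keepdims=True)                      # center each dimension
    z = z / (np.linalg.norm(z, axis=0, keepdims=True) + 1e-8)
    gram = z.T @ z                                        # (d, d) correlation matrix
    off = gram - np.diag(np.diag(gram))
    d = z.shape[1]
    return (off ** 2).sum() / (d * (d - 1))               # mean squared off-diagonal
```

The penalty is near zero for independent features and grows when dimensions are duplicated or strongly correlated, which is the independence condition the rebuttal appeals to.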
**W2.** We appreciate your comment. To ensure a fair comparison and align with previous work, we use the same encoder, $g$, as employed by earlier studies to validate our L-Reg approach. Insights into the selection or design of $g$ can be found in our response to W1.
As noted, L-Reg relies on the condition that the semantic features $z^i$ and $z^j$ (for $i \neq j$) are independent. Models that achieve an orthogonal semantic space are thus well-suited for applying L-Reg, as they naturally align with this requirement.
**W3.** Thank you for highlighting this point; it has been very insightful. Currently, we use a plain strategy of selecting the regularization weights from the range [0.01, 0.001, 0.0001], keeping them relatively small compared to the scale of other losses (approximately 1:10). We appreciate your comments and acknowledge that there may be deeper theoretical or empirical insights related to this issue. We plan to explore this further in future research.
**Q1.** Thank you for your recommendation.
For a thorough evaluation, we compare our L-Reg to the aforementioned Ortho-Reg and a sparsity-based regularization approach. To validate this fairly, we re-implement the Ortho-Reg and Bernoulli sample of the latent features from the sparse linear concept discovery models [3] on the same PIM backbone that we used. The results, presented in PDF Tab.2\&3, indicate that L-Reg consistently yields the most significant improvements when these regularization terms are applied alone. Additionally, as mentioned, combining L-Reg with Ortho-Reg further enhances performance since a more proper $z$ is obtained.
---
Rebuttal 2:
Comment: The authors' rebuttal dispelled some of my concerns, and I choose to maintain the score.
---
Rebuttal Comment 2.1:
Title: Thanks for response
Comment: We greatly appreciate your kind feedback and insightful comments. Those comments have significantly helped us improve our paper. | Summary: This paper addresses the multi-domain generalization (mDG), generalized category discovery (GCD), and the more challenging mDG+GCD task. The authors introduce a logical reasoning-based regularization term called L-Reg, which bridges logical analysis with image classification to enhance model interpretability and reduce complexity. The main idea of L-Reg is to identify a minimal set of semantics that can deduce the relationship between $x$ and $y$, i.e., the semantic support. Theoretical analysis and experiments demonstrate that L-Reg improves generalization across mDG, GCD, and mDG+GCD.
Strengths: 1. This paper establishes a connection between logical reasoning and practical visual generalization problems, bringing novel insights for improving DG and GCD.
2. The proposed L-Reg used over existing SOTA methods can further improve the SOTA performance.
3. The experiments are comprehensive. The visualization results provide a good empirical understanding of the role of L-Reg.
Weaknesses: 1. The presentation of this article is somewhat confusing for readers unfamiliar with logical reasoning, especially from Section 3.1 to Section 3.2. It would be better to add intuitive explanations of key concepts. For example, why should $F$ be atomic formulas for a logic to be a 'good general' one, or, what do the atomic formulas refer to in a real-world case?
2. The mDG improvements of L-Reg are over GMDG. The performance of directly applying L-Reg is not shown, i.e., ERM+L-Reg.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Line 145 claims that "Semantics that occur frequently across samples often lack decisiveness for classification." However, samples from the same class often share the semantic predictive of the class.
2. The definition of the "minimal semantics" $z_i$ is vague. The meaning of $Z_i$ is also unclear in Eq. (3).
3. Could you provide a concrete implementation of L-Reg? I wonder which layer(s) is(are) chosen for computing Eq. (3) in practice? Are the results sensitive to the selection of layers?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your insightful comments, and we address your weaknesses and questions point-by-point. Some points of weaknesses and questions are combined because they are very associated with each other.
**W1&Q2.** Thank you so much for your comments, and sorry for any confusion caused by our writing. We will follow your kind suggestions and add some intuitive explanations for key concepts in our paper. Please also kindly refer to Reply to All Reviewers for the theoretical analysis of atomic formulas and their relationship to the interpretability of L-Reg.
To address your questions more concretely, we use Paper Fig. 5 as a typical example to provide more analysis on atomic formulas and the interpretability of L-Reg.
For the known classes, the efficacy of L-Reg can be intuitively understood as extracting the minimal semantic supports for a given class label. As examples shown here, the presence of a guitar's fingerboard, even in unseen domains, helps classify a sample as belonging to the guitar category, whose informal form can be denoted as $h(F_{(\gamma=\text{has the fingerboard}, y=\text{guitar})},d\in D)\rightarrow\text{True}$. For all known classes, samples with these minimal semantic supports are recognized accordingly.
In contrast, if a sample lacks these minimal supports for any known class, it is very likely categorized as an unknown class. This behavior stems from Paper Eq.10 which ensures $\mathcal{A}^{y_i*} \neq \mathcal{A}^{y_j*}$ through constraining $\gamma^{y_i} \neq \gamma^{y_j}$. L-Reg further enhances the model's ability to identify minimal supports for unknown classes by filtering out co-covariant features associated with other classes and thus generalizing to unseen domains. Therefore, the very interpretable features for unknown classes from unseen domains can be extracted using L-Reg. Paper Fig. 5 (right side) demonstrates that the model with L-Reg can even extract facial features for the unknown person class and can generalize this to the unseen domain. Similarly, here we obtain an (informal) atomic formula as $h(F_{(\gamma=\text{has a face}, y=\text{person})},d\in D)\rightarrow\text{True}$.
We will include a more detailed discussion on this topic in the final version of our paper.
**W2.** Thank you very much for this comment. Please refer to the Reply to All Reviewers for the experimental details on the ERM baseline. As shown in PDF Tab.4, under the same experimental settings and hyperparameters, incorporating L-Reg with ERM significantly enhances the overall mDG performance, improving it from 49.9% to 52.9%.
**Q1.** We apologize for any confusion caused and appreciate your constructive feedback. L-Reg is designed to focus on eliminating shared semantics across all classes rather than addressing frequent features within a single class. To clarify this, please refer to the PDF Fig.1. In this figure, we have included fixed images with the same coordinates and additional figures illustrating feature distributions of known and unknown classes. Note here we use the values of the first components of PCA results on the original features, denoted as $v1^{st}$.
Before applying L-Reg, the $v1^{st}$ feature predominantly falls within the range [-0.4, -0.2] across all classes, especially identical between known and unknown classes. This indicates that a few specific semantics overly influence many features. L-Reg mitigates this issue by focusing the model on disentangled minimal semantic supports for classifying each class, thereby reducing feature complexity and enhancing generalization.
**Q2.** Many thanks for this comment.
The code for L-Reg is available in the supplementary materials we have provided, and we will release all code and hyperparameters to facilitate the reproduction of our experiments.
Regarding sensitivity, as discussed in Paper Line 344-358 and illustrated in Paper Tab.4, applying L-Reg to the semantic features from the deep layers improves performance for unknown classes without negatively impacting known classes.
We hypothesize this is due to the fact that our L-Reg is derived under the precondition that $z^i, z^j \in z$, $i \neq j$, are independent of each other. This condition holds for most deep-layer features but may not apply to shallow layers, and further regularizing the independence may lead to further improvements.
To test this hypothesis, we conducted experiments with orthogonality regularization (Ortho-Reg) to enforce feature independence in mDG and GCD tasks. As shown in PDF Tab.2,3&4, while using Ortho-Reg alone may not be very effective, combining L-Reg with Ortho-Reg leads to further improvements.
These findings support our hypothesis and suggest that L-Reg, particularly when applied to deep layers or in conjunction with Ortho-Reg, is beneficial.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your detailed reponse and additional experimental results! Most of my concerns are addressed, and I'm willing to raise my rating to 6.
---
Reply to Comment 1.1.1:
Title: Thanks for response
Comment: Thank you for your kind feedback. We will ensure that the relevant discussions are thoroughly incorporated into the final manuscript as suggested. | Summary: This work introduces a sample-based regularization technique, L-Reg, which goes beyond techniques like parameter-based L2 regularization by being more interpretable and demonstrating better generalization ability. The work formalizes the notion of semantic support to force the model to learn minimal sufficient statistics, quantitatively and qualitatively showing how that it leads to better generalization across multiple settings - multi-domain generalization, generalized category discovery and a new setting that is a combination of the two which they introduce.
Strengths: + A theoretically grounded paper with comprehensive experiments and results.
+ The paper is well written in general.
+ I especially liked how the algebraic logic formalism was neatly tied into this space. The idea of using semantic supports, although simple, is motivated and formalized well. I also liked how the negation of the semantic support set was used to formulate the optimization problem.
+ The derivation of conditions required to hold under various settings is well done and makes the derivation of the regularization easy to follow. The proposed mDG + GCD setting is interesting.
Weaknesses: * Although the method introduced is technically sound, with the baselines being quite comprehensive, the improvement in results seems minor. This suggests that accuracy may not be the right metric to compare against here. Considering the objectives proposed, shouldn’t other metrics beyond accuracy be considered?
* The paper states that “the semantics generated by the encoder and classifier can be combined to form atomic formulas”. I expected to see some of the actual learnt atomic formulae in the results - which I did not.
* From the formulation of the L_reg loss, it appears that any concept-based model that is sparse may be similar to the proposed formulation. How is the proposed method different from such methods -- formally and empirically?
Technical Quality: 3
Clarity: 3
Questions for Authors: * Since the paper mentions “constructing atomic formulae”, would it be possible to actually extract and see how these look from the model? This might significantly strengthen the paper.
* How does this method compare, formally and empirically, with sparse concept-based models? Wouldn’t their loss be very similar to the proposed L_reg loss?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful comments, and we address your weaknesses and questions point-by-point. Some points of weaknesses and questions are combined because they are very associated with each other.
**W1.** We really appreciate this insightful comment. In our study, we adopted the commonly used accuracy metric to align with the previous work we compared against. We agree that other entropy-based metrics could potentially provide a more effective evaluation. Additionally, it may be worthwhile to propose novel logic-based metrics for a more comprehensive assessment. We find this to be a very attractive and promising area of research, and we are eager to explore it further in future work.
**W2&Q1.** We appreciate these comments. For a more detailed explanation of L-Reg's interpretability, please refer to Reply to All Reviews 1. Notably, the CAM visualizations in our paper illustrate the model's learned atomic formulas. By using L-Reg, the model derives interpretable atomic formulas, which can also be understood as the most important features for predicting a given class. To address your questions more concretely, we use Paper Fig.5 as a typical example to analyze L-Reg’s interpretability further.
For the known classes, the efficacy of L-Reg can be intuitively understood as extracting the minimal semantic supports for a given class label. As examples shown here, the presence of a guitar's fingerboard, even in unseen domains, helps classify a sample as belonging to the guitar category, whose informal form can be denoted as $h(F_{(\gamma=\text{has the fingerboard}, y=\text{guitar})},d\in D)\rightarrow\text{True}$. For all known classes, samples with these minimal semantic supports are recognized accordingly.
In contrast, if a sample lacks these minimal supports for any known class, it is very likely categorized as an unknown class. This behavior stems from Paper Eq.10 which ensures $\mathcal{A}^{y_i*} \neq \mathcal{A}^{y_j*}$ through constraining $\gamma^{y_i} \neq \gamma^{y_j}$.
L-Reg further enhances the model's ability to identify minimal supports for unknown classes by filtering out co-covariant features associated with other classes and thus generalizing to unseen domains.
Therefore, the very interpretable features for unknown classes from unseen domains can be extracted using L-Reg. Paper Fig. 5 (right side) demonstrates that the model with L-Reg can even extract facial features for the unknown person class and can generalize this to the unseen domain. Similarly, here we obtain an (informal) atomic formula as $h(F_{(\gamma=\text{has a face}, y=\text{person})},d\in D)\rightarrow\text{True}$.
We will include a more detailed discussion on this topic in the final version of our paper.
**W3&Q2.** We really appreciate this inspiring comment. Following the Reply to All part 1, while a common sparse concept model may be able to achieve $\gamma^y\psi = z^y$ by filtering irrelevant features through the sparsity, it may not ensure $\gamma^{y_i} \neq \gamma^{y_j}$, which is crucial for disentangling features used for predicting different classes. This limitation can potentially lead to degradation in generalization performance for common sparse concept models.
To investigate this fairly, we re-implemented the Bernoulli Sample of the latent features from the Sparse Linear Concept Discovery Models [3] on the same PIM backbone that we used to achieve the sparsity. The results in PDF Tab.2&3 indicate that while L-Reg consistently achieves overall improvement,
the sparse concept-based approach does not consistently improve generalization, validating the aforementioned difference.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: I thank the authors for the detailed rebuttal, and the efforts.
* The additional results, esp the comparison with sparse concept models, is useful. I'd suggest that this should be included in the main result tables. Since one of the significant claims of this work is the reduction in complexity of parameters, comparisons with sparse models would be necessary to show the usefulness of this approach.
* Thank you for the qualitative example and explanation on the atomic formulas. It would be great to include a few qualitative results (positive and perhaps even cases where the method failed) in the appendix of the paper. This would greatly help understand the paper better.
The paper is meritorious, and I stay with my rating of WA.
---
Reply to Comment 1.1.1:
Title: Thanks for response
Comment: We sincerely appreciate your acknowledgment of our efforts and your constructive suggestions. We will update the comparison results with sparse concept models in the main result tables and add more qualitative results in the appendix.
Thank you once again for your valuable feedback, which has significantly contributed to the improvement of our paper. | Summary: The paper proposes a novel logical regularization termed L-Reg for visual classification. L-Reg encourages models to focus on the salient semantics and thereby emerges interpretability. The theoretical analysis provides clear connections between logical reasoning and L-Reg. Extensive experiments demonstrate that L-Reg also benefits the generalization of models to unseen domains and categories.
Strengths: 1. Studies on loss regularization have positive influences on various fields.
2. The paper is well-presented, and L-Reg is clearly presented with rigorous theoretical analysis.
3. The qualitative benefits of L-Reg are validated through experiments, and the generalization brought by L-Reg has been demonstrated sufficiently under three settings.
Weaknesses: 1. As stated in the introduction section, interpretability is a longstanding focus among studies in regularization terms. The authors claim that L2 regularization might lead to ambiguous interpretability, which also serves as an important motivation and contribution for L-Reg. However, the interpretability of L-Reg has not been adequately discussed except for the introduction section. Further analysis from either qualitative or quantitative perspectives can greatly strengthen the presentation.
2. The analysis of Figure 5 should be detailed, especially the examples of unknown classes in Row 3.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed both the limitations and the potential societal impact. The limitations are left for future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: These insightful comments are highly appreciated. We believe these two questions are very related; therefore, please allow us to address them together.
As discussed in Reply to All Reviews 1, the L-Reg's interpretability is rooted in learning the good general atomic formulas. Specifically, L-Reg encourages the model to identify the minimal semantic supports - the most important features - necessary for class recognition. Such an approach resembles humans' cognition process. Paper Fig.1,5 and visualizations included in the Paper appendix show that L-Reg enables the model to learn distinctive features such as facial features for recognizing the person class, the long-neck feature for the giraffe class, and so on.
To address your questions more concretely, we use Paper Fig.5 as a typical example to analyze L-Reg’s interpretability further.
For the known classes, the efficacy of L-Reg can be intuitively understood as extracting the minimal semantic supports for a given class label. As examples shown here, the presence of a guitar's fingerboard, even in unseen domains, helps classify a sample as belonging to the guitar category, whose informal form can be denoted as $h(F_{(\gamma=\text{has the fingerboard},y=\text{guitar})},d\in D)\rightarrow\text{True}$. For all known classes, samples with these minimal semantic supports are recognized accordingly.
In contrast, if a sample lacks these minimal supports for any known class, it is very likely categorized as an unknown class. This behavior stems from Paper Eq.10 which ensures $\mathcal{A}^{y_i*} \neq \mathcal{A}^{y_j*}$ through constraining $\gamma^{y_i} \neq \gamma^{y_j}$. L-Reg further enhances the model's ability to identify minimal supports for unknown classes by filtering out co-covariant features associated with other classes and thus generalizing to unseen domains. Therefore, the very interpretable features for unknown classes from unseen domains can be extracted using L-Reg. Paper Fig.5 (right side) demonstrates that the model with L-Reg can even extract facial features for the unknown person class and can generalize this to the unseen domain. Similarly, here we obtain an (informal) atomic formula as $h(F_{(\gamma=\text{has a human face},y=\text{person})},d\in D)\rightarrow\text{True}$.
However, as shown in Row 3, significant domain shifts, such as those between the sketch domain and other domains, pose challenges. Specifically, the differences between the stick-figure style of sketches of persons and figures from other domains can hinder the model's ability to cluster sketches with other domains' figures when the class label is unknown. Thus, under this circumstance, the model may fail to extract meaningful features from those sketches. We acknowledge this limitation and will explore solutions in future work.
Once again, we appreciate your thoughtful feedback. We will incorporate this analysis into the final version of our paper.
---
Rebuttal 2:
Comment: The rebuttal has well addressed my previous concerns about L_Reg and Fig.5, so I change my rating to WA.
---
Rebuttal Comment 2.1:
Title: Thanks for response
Comment: Thank you so much for your kind response. We greatly appreciate your constructive comments. | Rebuttal 1:
Rebuttal: **Reply to All**
We sincerely appreciate the reviewers' insightful comments, which have helped us refine and improve our paper. We have identified common concerns across the reviewers and address them collectively here. Detailed responses to individual reviewers are provided separately. Please note: References to contents from the paper are denoted with the prefix 'Paper'; 'PDF' indicates that the content is included in the uploaded PDF file.
**1. More about atomic formulas and interpretability of L-Reg**
Thank you for your comments regarding atomic formulas, which have guided us in highlighting the significance of L-Reg more effectively. The atomic formula $\mathcal{A}^y$ is of the form $h(f(g(x)),y,d)$ or $h(f(z, y),d)$. Our aim is to find the good (most) general $\mathcal{A}^{y*} \in \mathcal{A}^y$ for $y$ class from which the interpretability of L-Reg is derived.
Consider $\mathcal{A}^y_1, \mathcal{A}^y_2 \in \mathcal{A}^y$, if $\mathcal{A}^y_1$ is more general than $\mathcal{A}^y_2$, there will be a substitution $\psi$ such that $\mathcal{A}^y_1\psi=\mathcal{A}^y_2$ [1,4]. $\mathcal{A}^{y*}$ should meet $\mathcal{A}^{y*}\psi=\mathcal{A}^y_i\in \mathcal{A}^y$, which infers that $\gamma^y\psi = z^y$ (cf. Paper Eq.9) for predication of $y$ where $\gamma^y$ is the **semantic support** (cf. Paper Def.3.1). Note here that the form of $\mathcal{A}^y$ is constructed for $y\in Y$, i.e., predicate whether the sample belongs to the $y$ class. Considering multiple classes $y_i,y_j\in Y, i\neq j$, it has $\mathcal{A}^{y_i*}\neq \mathcal{A}^{y_j*}$ thus $\gamma^{y_i} \neq \gamma^{y_j}$ (cf. Paper Eq.10), which constrains that different minimal semantic supports should be used for predicting different classes.
The interpretability of L-Reg is based on $\mathcal{A}^{y*}$, compelling the model to use distinct minimal semantic supports for each class. These minimal semantic supports can be interpreted as the most critical features for efficient prediction. For example, as shown in Paper Fig.1, the model with L-Reg has learned the facial features of the person class (see more examples in Paper Supp Fig.7-12), forming the (informal) atomic formula $h(F_{(\gamma=\text{has a human face},y=\text{person})},d\in D)\rightarrow\text{True}$.
**2. Improvements with L-Reg**
**Improvement highlights of L-Reg.** We understand the points from the reviewers. Nonetheless, we humbly believe L-Reg actually leads to consistent and evident gains. Paper Tab 1-2 show the consistent overall improvements brought by L-Reg across different datasets in mDG and GCD, suggesting the feasibility of L-Reg. For GCD, **a 6.7% improvement on the unknown classes** of CIFAR100 and an average gain of 2.8% across unknown classes and all datasets also demonstrate L-Reg's efficacy for generalization. In mDG, the TerraInc dataset includes camera trap images that are challenging even for humans; L-Reg achieves **a significant 2.2% increase** on it and an average of 0.7% across five datasets.
**Apply L-Reg to ERM Baseline for mDG.** To further validate L-Reg's efficacy, we use ERM as the baseline on the TerraInc dataset for mDG. For a fair comparison, all experiments share the same hyperparameter settings and use the Regnety-16gf backbone. Original ERM results are also included alongside our reproduced results. The results in PDF Tab.4 reveal that ERM with L-Reg significantly improves mDG performance (**from 49.9% to 52.9%**).
**Apply L-Reg to congestion prediction for circuit design.** We also test L-Reg in congestion prediction for circuit design on the CircuitNet [2] dataset by using the CircuitFormer [5] backbone. All parameters, except for L-Reg, remain consistent with CircuitFormer, and we follow its metrics. Results in PDF Tab.1 show improvements with L-Reg across all metrics and a significant increase in the Pearson metric (**0.6374 to 0.6553**).
**Compare L-Reg with more regularization terms.** We compare L-Reg with other regularization terms: Ortho-Reg: the orthogonality regularization that constrains the independence of each dimension of the semantic feature $z$; and Sparsity: implemented as Bernoulli Sample of the latent features from the sparse linear concept discovery models [3] on our used PIM backbone. PDF Tab.2&3 demonstrate that L-Reg outperforms Ortho-Reg and Sparsity.
**Limitation of L-Reg and possible solutions.** As discussed in the Paper Limitation part and analysis around Paper Line 344-358, L-Reg is based on the precondition that each dimension of the $z\in Z$ represents an independent semantic. Thus, improper $z$ that does not meet this precondition may lead to sub-optimal results. To validate this hypothesis, we test L-Reg by reinforcing independence with Ortho-Reg. Results of MDG in PDF Tab.4 and GCD in Tab.2&3 show that combining L-Reg with Ortho-Reg leads to further improvements, whereas Ortho-Reg alone may not guarantee improvements. This suggests a direction for future work.
In summary, we believe the consistent improvements across all these experiments under different settings, with various baselines and backbones, demonstrate the excellent efficacy of L-Reg. These additional analyses and experiments will be included in the final version.
*References:*
[1] H. Andréka, I. Németi, and I. Sain. Universal algebraic logic. Studies in Logic, Springer, 2017.
[2] Z. Chai, Y. Zhao, W. Liu, Y. Lin, R. Wang, and R. Huang. CircuitNet: An open-source dataset for machine learning in VLSI CAD applications with improved domain-specific evaluation metric and learning strategies. IEEE TCAD, 42(12):5034–5047, 2023.
[3] K. P. Panousis, D. Ienco, and D. Marcos. Sparse linear concept discovery models. In ICCV, pages 2767–2771, 2023.
[4] I. Tsapara and G. Turán. Learning atomic formulas with prescribed properties. In Proceedings of the eleventh annual conference on Computational learning theory, pages 166–174, 1998.
[5] J. Zou, X. Wang, J. Guo, W. Liu, Q. Zhang, and C. Huang. Circuit as set of points. NeurIPS, 36, 2024.
Pdf: /pdf/fc0e7994a9c67815f0d9376d596f58c72d470acd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper mainly focuses on two problems: 1) How does logical reasoning relate to visual tasks such as image classification? 2) How can we derive a logical reasoning-based regularization term to benefit generalization? Then, this paper proposes a method called Logical Reasoning Regularization based on the analysis of the two problems. Theoretical analysis and experimental results demonstrate that L-Reg enhances generalization across several scenarios.
Strengths: 1. The main contributions of this article are: 1) Building the relationship between logical reasoning and visual tasks such as image classification; 2) Rethinking the classification task from the logical reasoning perspective and proposing Logical Reasoning Regularization. Overall, the contributions are meaningful, and the paper is interesting.
2. The paper is easy to read.
Weaknesses: 1. As can be seen from Table 1 and Table 2, the proposed regularization term yields only a weak improvement over the existing methods.
2. The analysis of this paper is incomplete and lacks theoretical analysis of sufficient conditions or sufficient and necessary conditions for meeting Atomic Formulas.
3. Overclaim. This paper claims that the proposed L-Reg can reduce complexity. However, the proposed L-Reg is a regularization term added directly to the learning objective. This does not reduce the computational complexity of training; rather, the extra regularization term increases the computational overhead of the whole training process.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. line 123, f_s should be f.
2. In Figure 1, the visualization results of the first line show that the learned features are basically concentrated on the background, suggesting the obtained model is overfitting or underfitting. I would like to know the specific parameter settings, experimental code, random seeds, etc., of the experiment.
3. In Figure 3, what does the abscissa represent? What is the baseline being compared? To make the results more convincing, an additional baseline and dataset need to be added.
4. In Figure 4, the coordinate proportions of the left subgraph and the right subgraph are inconsistent, so it is unfair to make a direct comparison. Secondly, it needs to be explained what the horizontal and vertical coordinates represent. Finally, the left and right subgraphs are almost identical, so it cannot be proved that +L-Reg can achieve real elimination of certain extracted semantics characterized by dominant frequencies across all samples.
5. In lines 173-186, why is the constraint in Eq. (6) removed in Eq. (8)? Please give a detailed derivation process.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful comments and we address your weaknesses and questions point-by-point.
**W1.** We humbly believe L-Reg delivers consistent and evident gains. Please refer to Reply to All for the highlighted improvements. **Moreover, as you suggested, we further validate L-Reg's efficacy by applying it to ERM for mDG and to CircuitFormer for the additional congestion prediction task. PDF Tab.4 shows that L-Reg improves the averaged performance of the ERM baseline from 49.9% to 52.9% and of CircuitFormer from 0.6374 to 0.6553 in the Pearson metric, besides increases in other metrics.**
**W2.** The sufficient and necessary conditions for achieving atomic formulas remain an open problem for the community. While we are working towards this, this paper proposes a first practical approach to approximate the most general atomic formulas. Please refer to the Reply to All, which provides more analysis of how the constraints in the paper are derived to achieve atomic formulas. Furthermore, in light of the current limitation of L-Reg discussed in the paper, we offer a future direction of obtaining a proper $z$ to meet L-Reg's precondition. Additional experiments of ERM on mDG and GCD (cf. PDF Tab. 2,3&4) show that constraining a proper $z$ leads to further improvement. We will discuss this grand topic in our revision and explore more aspects in the future.
**W3.** Sorry for any confusion. L-Reg is not designed to reduce the computational complexity but aims to reduce the complexity of the model parameters and the data features. As discussed in Paper Sec.3, L-Reg reduces the classifier's complexity by increasing the sparsity of its weights and features' complexity by removing over-dominated semantics across all classes (this comment is related to your Q3\&4, where details of how L-Reg achieves these can be found). We will polish this part in the final version for better clarity.
**Q1.** Thanks. We will fix all these typos.
**Q2.** Paper Fig.1 shows CAM visualizations of the models trained under the GMDG with the RegNetY-16G backbone for mDG+GCD on the PACS with the unseen domain art painting, using DomainBed's protocols and codes. Both models share the same training parameters and seed 0. The only difference between them is that the latter uses L-Reg with its weight as an extra hyperparameter. Details such as specific parameter settings, as you requested, are in Paper Appendix E.1 and codes for our method are included in the supplementary materials. In short, we believe the comparison should be fair because of the same hyperparameters and the seed being used for both models except L-Reg. Note that the same models are used for Paper Fig.3&4, and all CAM visualizations in the Paper appendix.
**Q3.** Using the aforementioned models, Paper Fig.3(a) presents the heatmap of the classifiers' weights, where the x-axis is the index of the weights in the linear layers. Paper Fig.3(b) shows the distributions of the classifiers' weight values, with the x-axis representing the value of the normalized weights and the y-axis showing the count of weight values in bin intervals. Following your suggestion, the fixed figure with a denoted abscissa and more descriptive captions is shown in PDF Fig.2. PDF Fig.2 demonstrates that L-Reg reduces the classifier's complexity by alleviating extreme weight values. The heatmap further indicates that L-Reg increases the sparsity of the classifiers, thus better generalization. Please refer to the reply of **W1** for details of more experimental results.
**Q4.** We apologize for any confusion caused by Paper Fig.4. We have re-drawn Paper Fig.4, now shown as PDF Fig.1, with distributions illustrated on the same coordinates and added features of known and unknown classes. Using the aforementioned models, PDF Fig.1 shows feature distributions based on the first component values after PCA (denoted as $v1^{st}$). Before using L-Reg, $v1^{st}$ mostly concentrates between [-0.4,-0.2] across all classes, indicating some specific semantics over-dominates the features. L-Reg alleviates this issue by forcing the model to obtain each class's minimal semantic supports, removing shared semantics across all classes to reduce feature complexity. PDF Fig.1 top row indicates the distance between the feature distributions of known and unknown classes is enlarged with L-Reg, making them more dividable for classification.
**Q5.** We provide more details about why $\vdash_{(h \circ g(X), Y)}=\models_{(g(X_s),Y_s)}$ in Paper Eq.6 can be safely omitted in the rest of the paper after the definition of logical framework. Consider the logic $L_{(X_s,Y_s)}=<F_{(X_s,Y_s)},D,\models_{(X_s,Y_s)},h,\vdash_{(h(X),Y)}>$, we want to study $\vdash$'s logic that is defined in the form of
$L_{\vdash}\stackrel{\text{def}}{=}<F_{X_s,Y_s},D_{\vdash},h_{\vdash},\models_{\vdash}>$, where $D_{\vdash},h_{\vdash},\models_{\vdash}$ are pseudo-components associated with $\vdash$. Particularly, $D_{\vdash}$ is a subset of all possible world/domains from $F_{(X_s,Y_s)}:D_{\vdash}\stackrel{\text{def}}{=}\{T\subseteq F_{(X_s,Y_s)}:T\text{ is closed under }\vdash_{(h(X),Y)}\}$. For any $T\in D_{\vdash}$ and $a\in F_{(X_s,Y_s)}$, it has $h_{\vdash}(a,T)\stackrel{\text{def}}{=}\{b\in F:T\vdash(a\leftrightarrow b)\}$. Further, $\models_{\vdash}$ in $T\in D_{\vdash}$ is defined as $T\models_{\vdash}a\stackrel{\text{def}}{\Leftrightarrow}a\in T$. [1] points out that the following condition is almost always satisfied: (Cond) $\forall a, b\in F_{\vdash},d\in D_{\vdash}$, we have $(h_{\vdash}(a,d)=h_{\vdash}(b, d))\text{ and }d\models_{\vdash}a\Rightarrow d\models_{\vdash}b$. Hence, the semantical consequence relation induced by $\models_{\vdash}$ coincides with the original syntactical $\vdash_{(h\circ g(X),Y)}$ while Cond holds. Due to $D_{\vdash}\subseteq D$, $\models_{(g(X_s),Y_s)}$ coincides with $\models_{\vdash}$. Thus, $\vdash_{(h\circ g(X),Y)}=\models_{(g(X_s),Y_s)}$ can be safely omitted.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your detailed response. My problems have been addressed well. I raise my rating to 6: Weak Accept.
---
Reply to Comment 1.1.1:
Title: Thanks for response
Comment: We greatly appreciate your kind response and insightful comments, which have significantly helped us improve our paper. | null | null | null | null | null | null |
Conjugate Bayesian Two-step Change Point Detection for Hawkes Process | Accept (poster) | Summary: The paper proposes a new Bayesian inference method for change point detection in Hawkes Processes using conditionally conjugate priors and Gibbs Sampling. In their experiments, their new method turns out to perform better than competing methods both in terms of accuracy and speed.
Strengths: The authors do a good job in motivating the relevance of the model and the need for improved (Bayesian) inference methods. The benchmarks appear to be sound and show clear benefits of the new method compared to alternatives. Obtaining conditionally conjugate priors for such a model is not an easy task and I think the authors do a good job in communicating their theoretical results both in the main text and in the appendix (where they provide the proofs).
Weaknesses: - The synthetic data experiments seem to be relatively limited, showing only the results of a single process with 2 change points. I think that a bigger simulation study with more variation in different aspects (beyond what the stress test study did) would have strengthened the paper.
- The stress test in the ablation study seems to vary the difficulty of the problem, which makes sense. However, the authors only show the results of their own method there, although I see no reason to not also show the results of the other methods for these cases.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In the synthetic datasets experiments, I wonder whether the data simulation benefits your method over the other methods? Asked differently, do your method and the other methods differ only in the inference algorithm applied, or also in the specific model they assume? In the latter case, simulating data from the same model that is assumed by your method (but not by others?) could potentially bias the results. I would like to understand this point better.
- In terms of innovations of the conditional conjugate approach, what are the concrete innovations over the approach proposed in [30] as cited in your paper?
- You only investigate the use of 1 to 3 basis functions. As someone experienced with splines, this seems relatively little. Can you elaborate why you only need few basis functions in most real world scenarios?
- Is your method implemented somewhere in a user friendly manner such that people can readily apply it to their own data?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As a non-expert for these models, I wonder if your new method also has limitations for univariate Hawkes processes? You only mention limitations for potential multivariate extensions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q: The stress test in the ablation study seems to vary the difficulty of the problem, which makes sense. However, the authors only show the results of their own method there, although I see no reason to not also show the results of the other methods for these cases.
A: Thank you for your valuable suggestion. Due to the page limit, we only presented the results of the stress tests for our own method in the paper. In the rebuttal PDF, we have included results for the other baseline methods as well. It is evident that our method still shows significant advantages; please see Tables 1, 2, and 3 in the rebuttal PDF. Thank you once again for your suggestion, and we will add these new results to the camera-ready.
> Q: In the synthetic datasets experiments, I wonder whether ...... could potentially bias the results. I would like to understand this point better.
A: Thank you for asking. The data simulation method could indeed favor our model over others because our model assumes a nonlinear Hawkes process, while the SMCPD and SVCPD models assume a linear Hawkes process. To address this potential bias, we modified the SVCPD model to also assume a nonlinear Hawkes process, resulting in the SVCPD+Inhi model. Our model still outperformed all these models, demonstrating the effectiveness and robustness of our approach. Thank you for highlighting this important point.
> Q: In terms of innovations of the conditional conjugate approach, what are the concrete innovations over the approach proposed in [30] as cited in your paper?
A: [30] used conditional conjugate methods for parameter estimation without change points, but we made an extension to this method, using conditional conjugate methods for change point detection. This is the main difference between our work and [30].
> Q: You only investigate the use of 1 to 3 basis functions. As someone experienced with splines, this seems relatively little. Can you elaborate why you only need few basis functions in most real world scenarios?
A: We verified this through experiments. In fact, we tried 1-5 basis functions. However, due to page limits, we only showed the results for 1-3. We found that having too many basis functions might lead to overfitting, so using fewer basis functions might be more effective.
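To make the basis-function setup concrete, here is a hedged sketch (not the authors' code) of how an influence function can be modeled as a weighted sum of beta densities rescaled to a bounded support; the function and parameter names (`influence_function`, `weights`, `shapes`) are illustrative, not the paper's notation.

```python
import numpy as np
from scipy.stats import beta

def influence_function(t, weights, shapes, support=1.0):
    """phi(t) as a weighted sum of beta densities on [0, support].
    `shapes` is a list of (a, b) beta shape parameters (illustrative names)."""
    t = np.asarray(t, dtype=float)
    inside = (t >= 0.0) & (t <= support)
    u = np.where(inside, t / support, 0.0)      # map to the unit interval
    phi = np.zeros_like(u)
    for w, (a, b) in zip(weights, shapes):
        phi += w * beta.pdf(u, a, b) / support  # rescale density to [0, support]
    return np.where(inside, phi, 0.0)           # zero outside the bounded support

# Three basis functions covering early, middle, and late influence
ts = np.linspace(0.0, 2.0, 2001)
phi = influence_function(ts, weights=[0.5, 0.3, 0.1],
                         shapes=[(1, 3), (2, 2), (3, 1)], support=2.0)
```

Since each rescaled beta density integrates to one over the support, the kernel's total mass is simply the sum of the weights (0.9 here), which makes branching-ratio or stationarity checks straightforward.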
> Q: Is your method implemented somewhere in a user friendly manner such that people can readily apply it to their own data?
A: The code can be run with a single Python file, and anyone can directly call this interface. Once everything is finalized, our code will be made publicly available on GitHub, making it easy for everyone to use on their own data. Thank you for your interest.
> Q: As a non expert for these models, I wonder if your new method limitations also for univariate Hawkes Processes? You only mention limitations for potential multivariate extentions.
A: For the univariate Hawkes process, these limitations do not exist.
---
Rebuttal Comment 1.1:
Comment: Thank you. I will keep my (positive) score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback on our work. We truly appreciate your recognition and support! | Summary: The paper aims to detect change points (in terms of model parameters) in point processes and proposes a conjugate Bayesian two-step change point detection method for Hawkes processes. This is achieved by applying data augmentation and a novel Gibbs sampler for closed-form updates for model parameters. For both synthetic and real-world data, the proposed method demonstrates superior performance in accuracy and efficiency compared to existing baselines.
Strengths: * The paper is very well-written in its structure, notation, explanation, and discussion. This work is put in the context of literature and is self-contained. All symbols are defined before using them.
* The main paper includes all the important pieces with great clarity and necessary detail while remaining concise.
* Experimental results demonstrate significant improvements from baselines (especially on real-world datasets). The settings are clearly described for readers to interpret the results.
Weaknesses: The results for synthetic data are not as strong as those for real-world data, please see my question below.
Technical Quality: 4
Clarity: 4
Questions for Authors: Could you help explain the statistical significance of the results for synthetic data, e.g., how many runs are averaged to obtain the std.? It seems to me that 1 std. is relatively large compared to the average, and there are large overlaps of 1 std. between different models.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q: Could you help explain the statistical significance of results for synthetic data, e.g., how many runs to average the results for the std.? It seems to me that 1 std. is relatively large compared to the average, and there are large overlaps of 1 std. between different models.
A: Thanks for your question; this is a very good point. We averaged the results over 5 runs to obtain the final results. The randomness in our experiments comes from the sampling of model parameters from the corresponding parameter posterior and the sampling of the next point given the sampled parameters. The interval of our model takes all this randomness into account, which is why the experimental variance is a bit large.
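For readers who want to see the mechanics, here is a hedged sketch of the Monte Carlo construction described above: parameters are drawn from their posterior, the next arrival is simulated given each draw, and the credible interval absorbs both sources of randomness. For simplicity a constant-rate (exponential waiting-time) model stands in for the Hawkes conditional intensity, which would instead be simulated by thinning; all names are illustrative.

```python
import numpy as np

def predictive_interval(posterior_rates, rng, level=0.95):
    """Credible interval for the next waiting time, combining posterior
    uncertainty over the rate with the randomness of the next arrival.
    A Hawkes model would simulate the next point from its conditional
    intensity (e.g. by Ogata thinning) instead of an exponential draw."""
    waits = rng.exponential(1.0 / np.asarray(posterior_rates))
    alpha = (1.0 - level) / 2.0
    lo, hi = np.quantile(waits, [alpha, 1.0 - alpha])
    return lo, hi

def flags_change_point(observed_wait, interval):
    lo, hi = interval
    return not (lo <= observed_wait <= hi)

rng = np.random.default_rng(0)
posterior_rates = rng.gamma(shape=50.0, scale=2.0 / 50.0, size=5000)  # rate ~ 2
interval = predictive_interval(posterior_rates, rng)
```

Because the interval mixes over posterior draws rather than conditioning on a single point estimate, it is naturally wider, which matches the relatively large variance noted above.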
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I'll keep my positive score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback on our work. We truly appreciate your recognition and support! | Summary: This paper proposes a conjugate Bayesian two-step change point detection method for the Hawkes process using data augmentation. It addresses the computational inefficiency of existing methods by providing analytical expressions. The new method proves to be more accurate and efficient, as demonstrated by extensive experiments on both synthetic and real data.
Strengths: This paper proposes a novel method that ensures the posterior distribution of the bounded historical period Hawkes process is conjugate. Innovatively, it employs Pólya-Gamma variables and marked Poisson processes. This innovative approach is highly useful, extending beyond change point detection to potentially a wide range of applications for the Hawkes model.
Empirical results show the proposed method outperforms counterparts.
Weaknesses: The implementation details have not been adequately discussed and presented. While some figures and details are provided in the appendix, I suggest moving the important ones to the main content and discussing them in detail. Additionally, I recommend including more background information on Pólya-Gamma variables and marked Poisson processes, as these concepts may not be familiar to readers outside this domain.
Technical Quality: 3
Clarity: 2
Questions for Authors: Gibbs sampling is still used; is that computationally expensive? Could you please discuss the computational complexity of your algorithm?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, discussed in a separate section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q: The implementation details have not been adequately discussed ...... as these concepts may not be familiar to readers outside this domain.
A: Thank you for your suggestion. Due to the page limit, some content had to be placed in the appendix. However, we appreciate your feedback and will consider making a more reasonable adjustment to the content placement in the camera-ready. Additionally, we will include more background information on Pólya-Gamma variables and marked Poisson processes to aid readers who may not be familiar with these concepts.
> Q: Gibbs sampling is still used; is that computationally expensive?
A: Thanks to the data augmentation technique, the data-augmented Gibbs sampler has completely analytical expressions, making it computationally inexpensive to use.
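As a generic illustration of why the augmentation makes Gibbs cheap, the sketch below applies the same Pólya-Gamma trick to Bayesian logistic regression (in the style of Polson, Scott, and Windle): once the PG variables are drawn, the conditional for the weights is an ordinary Gaussian, so every update is in closed form. The truncated-sum PG sampler and all names here are illustrative assumptions, not the authors' Hawkes sampler.

```python
import numpy as np

def sample_pg1(c, rng, trunc=200):
    """Approximate PG(1, c) draws via the truncated infinite-sum-of-gammas
    representation; truncation introduces a small downward bias."""
    c = np.atleast_1d(c)
    k = np.arange(1, trunc + 1)
    g = rng.gamma(1.0, 1.0, size=(c.shape[0], trunc))
    denom = (k - 0.5) ** 2 + (c[:, None] / (2.0 * np.pi)) ** 2
    return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

def gibbs_logistic(X, y, n_iter=300, rng=None):
    """PG-augmented Gibbs for logistic regression with a N(0, I) prior:
    both conditionals (omega | beta and beta | omega) are closed-form."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    beta_s, kappa = np.zeros(d), y - 0.5
    draws = []
    for _ in range(n_iter):
        omega = sample_pg1(X @ beta_s, rng)             # omega_i | beta ~ PG(1, x_i beta)
        V = np.linalg.inv(X.T * omega @ X + np.eye(d))  # beta | omega is Gaussian
        beta_s = rng.multivariate_normal(V @ (X.T @ kappa), V)
        draws.append(beta_s)
    return np.array(draws)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ np.array([2.0, -1.0])))).astype(float)
draws = gibbs_logistic(X, y, rng=rng)
```

Every step is a standard random draw (gamma or Gaussian), which is the sense in which a data-augmented Gibbs sweep is inexpensive despite being iterative.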
> Q: Could you please discuss the computational complexity of your algorithm?
A: Indeed, we have already done this. The discussion on computational complexity is provided in detail in Section 3.3.5, ‘Algorithm, Hyperparameters, and Complexity,’ on page 6, line 215 of the main content.
---
Rebuttal Comment 1.1:
Title: Question about Gibbs sampling
Comment: Thanks for the response. Although the augmented Gibbs sampler has analytical expressions, does it need to iteratively sample one variable conditional on the other variables? Does it still need to repeat the iterative samplings/calculations multiple times until convergence? Does it have to be sequentially computed?
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. Your understanding is right. Gibbs sampling needs to iteratively sample one variable conditional on the other variables, and it needs to repeat these iterative samplings/calculations multiple times until convergence.
Strengths: 1. The paper is well-written and organized.
2. Sufficient background is provided, and clear motivation is discussed.
3. The targeted problem, i.e. posterior computation for Bayesian change point detection problem, is an interesting, important, and challenging problem.
4. The proposed method is concise, and very applicable in real problems.
5. Extensive numerical experiments and analyses are conducted.
Weaknesses: The method is only Bayesian in the first step, i.e. the step of obtaining the posterior predictive distribution of the next event occurrence t_{m + 1}. The determination of the change point is based purely on whether the next event occurrence lies within the posterior predictive credible interval, which no longer takes into account the uncertainty at this step. It would be better and more intuitive if a jointly Bayesian model could be used - of course, this is also more computationally challenging.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the benefits of using beta density as the basis functions for \phi, compared to using say functions based on differently scaled RBF or Matern kernels?
2. Just like the sigmoid function can be polya-gamma augmented, the probit function also has its own data augmentation techniques. Can some similar Gibbs sampler be obtained if we use the probit function as \sigma?
3. How well is the mixing of the data augmented Gibbs sampler?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q: The method is only Bayesian in the first step, ...... more computationally challenging.
A: Thank you for your valuable suggestion. As you mentioned, we indeed apply the Bayesian treatment only in the first step at the moment. In future work, we will consider using a fully Bayesian approach.
> Q: What are the benefits of using beta density as the basis functions for $\phi$, compared to using say functions based on differently scaled RBF or Matern kernels?
A: This is a good question. We experimented with beta density as the basis functions for $\phi$ because we followed the convention in the previous work [30]. As stated in [30], ``Although basis functions can be in any form, to make the weights indicative of functional connection strength, basis functions are chosen to be probability densities with support $[0, \infty]$". Furthermore, they restricted the support to $[0, T_\phi]$ to accelerate the computation of ${\Phi}(t)$. Therefore, beta density is a natural choice.
> Q: Just like the sigmoid function can be polya-gamma augmented, the probit function also has its own data augmentation techniques. Can some similar Gibbs sampler be obtained if we use the probit function as $\sigma$?
A: This is an interesting question. The probit method also has its own data augmentation techniques. Theoretically, a similar Gibbs sampler can also be obtained if we use the probit function to replace $\sigma$. However, we have not tried this method.
> Q: How well is the mixing of the data augmented Gibbs sampler?
A: Thank you for your question. In our experiments, we set the burn-in to 90, considering the samples starting from the 91st as samples from the stationary distribution. We found that the mixing of the data augmented Gibbs sampler performed very well.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the detailed reply. My concerns and questions have been partly addressed. I am keeping my rating as is.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback on our work. We truly appreciate your recognition and support! | Rebuttal 1:
Rebuttal: We thank all reviewers for their efforts in providing insightful comments and constructive feedback.
We are pleased that the reviewers have recognized the significance of our paper in solving an interesting change point detection problem in Hawkes process [R1, R2, R3, R4], conducting comprehensive numerical experiments [R1, R2, R3, R4], and maintaining clear and concise writing [R1, R2, R3].
In the following, we address reviewers’ comments point by point.
Pdf: /pdf/0f189651f88dcbf4f706ed7baa8a14a603da2dea.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Gliding over the Pareto Front with Uniform Designs | Accept (poster) | Summary: This paper attempts to address a challenging problem in the field of MOO: how to generate a uniform set of solutions on the PF. The authors try to directly characterize uniformity and propose fill distance to quantitatively measure it. Given that fill distance is difficult to optimize directly, the paper further introduces a surrogate problem and theoretically analyzes the relationship between the surrogate problem and the original problem. Finally, the paper presents an MOA, named UMOD, based on MOEA/D. UMOD models the PF through an MLP and adaptively adjusts preference weights by optimizing the surrogate problem. Empirical studies demonstrate the effectiveness of UMOD.
Strengths: This paper tackles a longstanding challenge in MOO: producing uniformly distributed solutions on the PF. Generally, this paper is well written and easy to follow.
I am really impressed by the theoretical analysis of uniformity. In the fields of MOO and evolutionary algorithms, past theoretical research has mainly focused on convergence, while theoretical analysis of distribution has long lacked progress. This paper introduces new mathematical tools for the analysis of diversity, making significant theoretical contributions to improving the uniformity of solutions. Previous MOAs typically used some mechanisms based on intuition to maintain distribution, such as crowding distance. These mechanisms lack theoretical basis and often do not perform well in practical applications. This paper is among the first to propose an MOA with theoretical guarantees for diversity, paving a new path for future research in MOO.
Weaknesses: Although the theoretical part is impressive, I am not fully convinced that UMOD is a novel and outstanding MOA.
**One of my major concerns is about the fill distance.** I acknowledge that the theoretical analysis of fill distance is very helpful. However, fill distance does not seem like a novel concept in MOO; actually, it can be regarded as a variant of IGD. The proposed fill distance is $\max_{y \in T} \min_{y^\prime\in\mathbb{Y}\_K} \rho(y,y^\prime),$ and IGD is $\operatorname{mean}\_{y \in T} \min\_{y^\prime\in\mathbb{Y}\_K} \rho(y,y^\prime)$. The difference is on that the fill distance uses $\max$ whereas IGD uses $\operatorname{mean}$. Therefore, minimizing the fill distance is generally equivalent to minimizing IGD, on which there have been many researches in the field of MOO. Moreover, a significant advantage of IGD over fill distance is that it can reflect the overall distribution of almost all non-dominated solutions, whereas fill distance only focuses on the maximal distance. This means that almost all the solutions may contribute to IGD, but only one solution contributes to fill distance. This directly leads to a drawback of the proposed UMOD algorithm: it can only update the nearest pair of weight vectors each time, while the gradients for the others are zero. This may limit the efficiency of the UMOD algorithm. This issue will be further discussed below.
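For concreteness, the two indicators the reviewer contrasts can be sketched as follows (illustrative NumPy code; the function names are ours, with `ref` a dense reference sampling of the PF and `approx` the $K$-solution set):

```python
import numpy as np

def nearest_dists(ref, approx):
    # Distance from each reference point to its nearest approximation point.
    d = np.linalg.norm(ref[:, None, :] - approx[None, :, :], axis=-1)
    return d.min(axis=1)

def fill_distance(ref, approx):
    # max over reference points: the single worst-covered point decides the value.
    return nearest_dists(ref, approx).max()

def igd(ref, approx):
    # mean over reference points: every reference point contributes.
    return nearest_dists(ref, approx).mean()
```

Only the max/mean aggregation differs, which is the reviewer's point: a single worst-covered reference point determines the fill distance, whereas essentially all reference points contribute to IGD.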
**Another concern is on efficiency.** Although the authors claim that efficiency is one of the main highlights of this paper (L139), I did not find results regarding efficiency, such as resource consumption, convergence curves, etc. Some critical experimental settings are also missing; for example, it does not seem to mention how many function evaluations UMOD and these baselines used. Without this key data, I cannot be convinced that the proposed algorithm is efficient. On the contrary, for the following reasons, I think UMOD might be much more expensive than the baselines.
The first reason is that the authors do not seem to take efficiency into account when designing the optimization objective. As mentioned above, the original problem (Eq. 2) and the surrogate problem (Eq. 3) proposed are both min/max optimization problems. Therefore, in each iteration, only the nearest pair of weights will be updated, while the others, which may also be disorganized, remain unchanged. This may make the weight updates very slow. It's like moving the two closest points slightly apart each time, which may require many iterations for all the points to achieve a uniform distribution. Additionally, the theorems and optimization objectives rest on a strong assumption that $\mathbb{Y}_K\subset T$. This means that before the solution set converges well to the PF, the theoretical analysis does not hold, making the weight updates potentially meaningless.
The second reason is the use of neural networks. In some previous studies on expensive MOO, DL models are used because fitness evaluation often takes much more time than training a model. However, in the general MOPs discussed in this paper, to balance efficiency, it is rare for MOAs to use neural networks to simply maintain distribution. In UMOD, training the PF model requires thousands of epochs each time, and this model needs to be repeatedly trained many times. Therefore, the training process of the PF model is quite expensive. Furthermore, as mentioned earlier, this model is only effective when the solution set has already converged to the PF, which raises a question: if the solutions have already converged, why incur such a high cost to repeatedly train a neural network just to make the distribution of some solutions more uniform?
Based on these reasons and the minor ones listed below, I do not recommend accepting this paper in its current form. Since some of these problems may be easy to fix, I will increase my rating if some of my concerns are adequately addressed.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. P1, Line 30. The dimension of PF can be considered as $m-1$, provided that PF is not a degenerate one. However, the dimension of PS has no relation to $m$.
2. P1, Line 36. It is “crowding distance”, not “crowded distance”. Moreover, NSGA-II uses crowding distance, whereas NSGA-III does not. This mistake also appears in P2, Line 80.
3. P3, Line 88. HV is to be maximized, while IGD is to be minimized.
4. P5, Line 163. $\partial \delta/\partial \theta^{(j^*)}$ is $-CA$ or $C(-A)$ instead of $C-A$.
5. P6, Footnote. I agree that DTLZ7 is not connected. But why do DTLZ5-6 not meet the requirements? DTLZ5-6 are both compact and connected.
6. P9, Line 279. Regarding these baselines as "SOTA" might be controversial. Of course, I don't think that a novel method must be able to outperform all past ones, but the baselines selected here are much lower than the SOTAs I've seen in recent years (with HV>3 on Adult and Compass). I think using these algorithms as baselines is ok, but please do not call them "SOTA".
7. P17, Line 586. The authors might have confused IGD and IGD+, which are two different indicators. In the caption (Line 586), the authors write IGD, on the left side of the equation they write IGD+, and on the right side of the equation it is IGD again.
8. P25, Fig. 10. The caption should be placed below the image.
9. The method proposed in this paper can be classified as a weight adaptation mechanism. Weight adaptation is a popular research direction in MOO, and there is a substantial amount of related work that shares some similarities with the focus of this paper. Therefore, I suggest that the authors include these related works in the literature review.
10. Some experimental settings are missing, e.g., the reference point for HV and the reference set for IGD. These settings are important for reproducibility.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The discussion on limitations is presented in Section D.2 of the supplementary material. The authors' discussion of the limitations is somewhat perfunctory. Many limitations are mentioned in the main text, such as the inability to solve constrained problems and the inapplicability to disconnected PFs; however, they are overlooked in the limitations section. I suggest that the authors reconsider and summarize the limitations of the proposed method, as this will help readers gain a more comprehensive understanding of the contributions of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the exhaustive feedback and, especially, for acknowledging that our work and contributions are significant to the community. Due to space limit, we will **directly take the advice on writing** and incorporate them into the next revision.
**W1. (1) Minimizing the fill distance is generally equivalent to minimizing IGD. (2) Limited efficiency of the UMOD.**
The comment mainly includes two points:
1. whether FD is new given IGD (c.f. our general response 1)
2. whether our proposed algorithm is efficient (deferred to W2).
**W2. (1) Efficiency concerns. (2) If the solutions have already converged, why just make some solutions more uniform?**
**Firstly**, we aim to address your concern with **efficiency**.
*Conceptually*, optimizing the max-min problem (Eq.7) is naturally efficient due to the following reasons:
- This optimization problem is solved by **gradient ascent**, which is more efficient than the evolutionary computations (ECs).
- The gradient calculation is **closed-form** and GPU-accelerated.
- Although only two preferences are updated, they reflect global information, eliminating the need to calculate the rest of the gradients.
- The PF model is **lightweight**, taking preference angle ($(m-1)$-D) as input and outputting objective ($m$-D).
For algorithm configurations, see our response to AxUm(Q2). In bi-objective problems with a maximum fitness value of 32,000 (4000*8, with 8 as the main population), preferences are updated every 1000 iterations, resulting in at most *three* updates or retrainings of the Pareto front model. Thus, the Pareto front model is not trained **repeatedly**.
*Empirically*, we analyze time consumption using the bi-objective ZDT1 optimization as an example. Finding the best preference configuration typically takes **500** iterations (**1s**), while training the Pareto front model requires only **10** iterations (**75ms** with GPU acceleration; see the convergence curve provided in the one-page PDF). The total runtime for UMOD is **95.49s**, with preference optimization and PF model training accounting for a negligible **3.14%** and **0.24%** of the total time, respectively.
We also would like to make two remarks.
- The two nearest preferences are updated repeatedly (rather than one time) to find the optimal configuration until convergence (convergence curve cf one-page PDF).
- The max-min optimization can be accelerated by optimizing the max-softmin objective.
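The max-softmin relaxation mentioned in the second remark can be sketched as follows (illustrative NumPy code; the temperature `tau` is our naming): replacing the hard min over pairwise preference distances with a softmin yields nonzero gradients for every pair, not just the closest one.

```python
import numpy as np

def softmin(d, tau=0.1):
    # Smooth lower bound on min(d); recovers the hard min as tau -> 0.
    d = np.asarray(d, dtype=float)
    m = d.min()
    # Log-sum-exp trick for numerical stability.
    return m - tau * np.log(np.sum(np.exp(-(d - m) / tau)))
```

Maximizing `softmin` of all pairwise distances is then an ordinary smooth objective, whereas the hard max-min objective has a zero gradient for every pair except the nearest one.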
**Secondly**, we explain why **finding uniform Pareto objectives is necessary**.
The main purpose of this paper is to study generating **$K$ uniform Pareto solutions** that represent the entire Pareto front in a **single run**. This is meaningful because:
- Real-world applications (e.g., product releasing) often require producing only $K$ Pareto products, making the study of topK uniform solutions beneficial.
- Too many solutions can overwhelm users, especially for many-objective problems. A small number of representative solutions is helpful during the early stage.
Previous methods, such as "subset selection" (Qian et al., 2020), achieve a $1 - 1/e$ precision guarantee by selecting topK representative solutions. Table 7/8 shows UMOD outperforms subset selection. Additionally, UMOD is more efficient, requiring a smaller main population (**40** for subset selection vs **10** for UMOD) and less time (**142.2s** for subset selection vs **95.49s** for UMOD).
**Q1. However, the dimension of PS has no relation to $m$.**
The effective dimension of the Pareto set (PS) manifold in the $n$-D Euclidean space is $m-1$ (see Ye et al., 2022, p.1, line 12). For biobjective optimization, the PS is a 1-D curve embedded in the $n$-D space.
**Q5. Results on DTLZ5-7?**
Please refer to general response 2.
**Q6. Regarding these baselines as "SOTA" might be controversial.**
We will use the term "baseline" rather than "SOTA" in the revised paper.
**Q7. IGD and IGD+?**
It is a typo and has been fixed.
**Q9. Literature review of weight adaptation methods.**
We will add a new subsection to discuss weight adaptation method.
Specifically, we will follow the survey (Ma et al, 2020) as well as other recent progress to review the existing 6 categories of weight adaptation methods.
We remark the main **differences** between our proposed UMOD and other weight adjustment methods are:
1. the theoretical motivation to improve the uniformity of solutions.
2. the first practical neural network implementation (UMOD) to fit the Pareto front model (with neglectable training time $<100$ms).
**Q10. Some experimental settings are missing, e.g., the reference point for HV and the reference set for IGD.**
As shown in the already uploaded core source code, we simply follow the conventional settings (e.g., using the default `pareto_front()` function in pymoo to calculate the reference set for IGD). We will detail these settings in Appendix B.3.
**L1. The authors' discussion of the limitations**
We intentionally skipped the already mentioned limitations in Appendix D.2. As per your advice, we plan to reorganize D.2 as follows:
While we have illustrated UMOD's success, it is also crucial to understand the limitations thereof.
First, UMOD theoretically does not address disconnected Pareto fronts such as DTLZ7, as we note Theorems 3 and 4 both assume a connected Pareto front to ensure uniform distribution.
Second, cases containing discrete decision variables or constrained MOPs are also dismissed, as a continuous Pareto front is required for UMOD.
Finally, the neural network complicates the analysis of the optimization landscape, which is a common issue for using neural networks.
---
Reference:
1. Pareto Navigation Gradient Descent: a First-Order Algorithm for Optimization in Pareto Set. Ye et al. UAI. 2022.
2. A Survey of Weight Vector Adjustment Methods for Decomposition-Based Multiobjective Evolutionary Algorithms. Ma et al. TEVC. 2020.
3. Subset Selection by Pareto Optimization with Recombination. Qian et al. AAAI. 2020.
---
Rebuttal 2:
Comment: Thank you for your response. Some of my concerns have been adequately addressed. However, I still have some concerns about the proposed UMOD algorithm.
1. This is a minor comment, but I have some reservations about your response. You provided a paper (Ye et al., 2022, p.1, line 12) to support your claim that "the dimension of PS is $m-1$". Actually, this paper did not explicitly prove this claim, so I think the statement in that paper may also be incorrect. The vector function may be a projection from $\mathbb R^n\to \mathbb R^m, (n>m)$. In this regard, I can not find any evidence of why the dimension of PS is $m-1$ without further assumptions, because there may be multiple $x$ resulting in the same $f(x)$.
2. About UMOD "require fewer main populations". I think a good algorithm should work well for adjustable population size, so I am confused about what "require fewer main populations" means.
3. About the limits of subset selection. I think "subset selection" is a problem, not an algorithm. There are many algorithms for solving subset selection. The $1-1/e$ bound is the guarantee of naive greedy algorithms. Actually, there are many subset selection methods that could achieve better results. Therefore, I do not agree with the authors claiming that the $1-1/e$ bound is a shortcoming of subset selection. There is a recent work [1] that also focuses on uniformity design but is based on subset selection. The results presented in [1] seem comparable with, or even better than, UMOD, regarding uniformity. Moreover, [1] seems more efficient because it does not require training a network. [1] can effectively deal with disconnected PFs like DTLZ7, whereas UMOD cannot. [1] optimizes for uniformity by minimizing an energy function, which is similar to UMOD which optimizes Eq. (3). Another representative work [2] focusing on weight adaptation also uses subset selection and archiving strategies to optimize for uniformity, and achieves very promising results in various PFs. I am not asking for new results, but to my knowledge and based on the results reported in this paper, UMOD does not seem to perform better than the real SOTA baselines regarding uniformity.
4. The assumption $y\in T$ in the theory part is very idealized. Moreover, the PF model cannot learn the PF when the solutions are not well converged. In such situations, is the PF model meaningless?
5. The FE budget seems very high. The maximum fitness evaluations are 32,000 for bi-objective, 126,000 for three-objective, and 350,000 for four-objective optimization. It makes me believe that UMOD needs much more budget compared with the baselines.
References
[1] Enhancing Diversity by Local Subset Selection in Evolutionary Multiobjective Optimization. IEEE TEC.
[2] What Weights Work for You? Adapting Weights for Any Pareto Front Shape in Decomposition-based Evolutionary Multi-Objective Optimisation. Evolutionary Computation.
---
Rebuttal 3:
Title: Author's response
Comment: We deeply appreciate your prompt response. We hope that the following answers address your concerns.
> **Q1.** Why the PS dimension is $m-1$?
This means that the effective dimension of the Pareto set is $m-1$. This conclusion can be found in Theorem 2.2, Equation (6), on page 5, line 18 of Hillermeier et al. (2001). For completeness, we provide a rough analysis here. For a detailed proof, please refer to Hillermeier et al. (2001), page 5.
Under a mild KKT condition (Equation (2), Hillermeier et al., 2001), the gradients of all solutions that belong to the Pareto set satisfy $n-m+1$ gradient constraints. Therefore, the dimension of the tangent space of the Pareto set is $n - (n-m+1) = m-1$. This indicates that the dimension of the Pareto set manifold is $m-1$.
For example, the PS for a typical 2-objective optimization problem is a 1-D curve, and the PS for a typical 3-objective optimization problem is a 2-D surface. For the classic ZDT1, no matter how large the decision number $n$ is, its PS is always the 1-D line from [0,0,..,0] to [1,0,..,0].
-------
Reference
Generalized Homotopy Approach to Multiobjective Optimization, Journal of Optimization Theory and Applications. 2001. C. Hillermeier.
> **Q2.** About UMOD "require fewer main populations". I think a good algorithm should work well for adjustable population size, so I am confused about what "require fewer main populations" means.
The claim of "fewer populations" compares UMOD with subset selection in finding $K$ representative solutions. UMOD maintains a population size of $K$, whereas subset selection requires a larger initial pool of solutions to choose the final $K$. Specifically, we choose to use $4K$ solutions for subset selection. Therefore, we say UMOD uses fewer populations than subset selection.
UMOD does work well for a wide range of population sizes. Due to the space limit, we have reported that UMOD works for a large range of different population sizes $K$, which is provided in the one-page PDF. If needed, we can report more experimental results.
> **Q3.** Comparison between UMOD and subset-selection.
UMOD offers two key advantages over energy-based subset selection:
- **Single-Phase Algorithm**: Energy-based subset selection involves a "two-phase" process: (1) generating a large number of candidate solutions and (2) performing subset selection. This two-phase approach can be computationally expensive, especially when the candidate pool is large. Additionally, determining the appropriate size of the candidate pool to ensure representative solutions is uncertain. In contrast, UMOD operates as a single-phase algorithm, eliminating the need for an extensive candidate generation phase.
- **Efficient Optimization**: Energy-based subset selection is a discrete optimization problem, which can be computationally intensive. UMOD, on the other hand, employs a gradient-based method for selecting representative solutions, leveraging continuous optimization. Gradient-based methods are generally more efficient and faster compared to discrete optimization problems, leading to more timely and resource-effective solutions.
We argue that training a neural network is not a significant burden and should be widely adopted in multi-objective optimization due to its robustness (see response to Q1, JBGb) and quick training time (75ms).
We will soon compare it with LSSA (Wang et al., 2023). While we currently lack numerical results for LSSA, **Table 3 in the LSSA paper indicates that LSSA underperforms compared to SMS-MOEA on the ZDT4, DTLZ1, DTLZ5, and DTLZ6 problems. In contrast, UMOD consistently outperforms SMS-MOEA on those tasks. The results compared with SMS-MOEA serve as evidence that UMOD performs better than LSSA**.
---
Reference
Enhancing Diversity by Local Subset Selection in Evolutionary Multiobjective Optimization. Wang et al. 2023. IEEE TEC.
> **Q4** When the solutions are not well converged. In such situations, is the PF model meaningless?
**Firstly**, our results show that solutions easily converge to the PF, but uniformity of those solutions is a big issue. Therefore, we use converged Pareto solutions to train PF models in our experiment.
**Secondly**, according to our results, even if solutions are not fully converged, the fitted model remains meaningful. A rough Pareto front model still aids in finding more uniform solutions through solving the optimal preference configuration problem.
> **Q5.** The FE budget seems very high.
This question can be answered from two aspects.
- UMOD is the first method, to our knowledge, to reliably find uniform Pareto objectives, as shown in Table 1 of our response to AxUm, where the spacing indicator is nearly zero. Increasing the FE budget is worthwhile for achieving uniform Pareto objectives.
- The FE budget can be largely reduced. Previously, we did not devote much effort to reducing it. We will provide new results for you soon.
---
Rebuttal Comment 3.1:
Comment: Thank you for your detailed response. I have increased my rating.
---
Rebuttal 4:
Title: Thanks for increasing your score
Comment: Thank you for increasing your score. We are conducting additional experiments to address your concerns and will report the results to you as soon as possible.
---
Rebuttal 5:
Title: Analysis of different FEs
Comment: **Table 1. Results under different function evaluations. (K=10).**
| Problem | Func. Eval. (FE) | Method | Spacing | Sparsity | HV | Uniform | Soft Uniform | IGD | FD |
|---------|------------|--------|------------|------------|------------|------------|--------------|------------|------------|
| ZDT1 | 15,000 | UMOD | **0.0007** | **0.0266** | **1.0536** | **0.1616** | **0.0167** | **0.0403** | **0.0809** |
| | 15,000 | MOEAD | 0.0506 | 0.0284 | 1.0503 | 0.1175 | -0.0047 | 0.0414 | 0.1353 |
| | 20,000 | UMOD | **0.0009** | **0.0266** | **1.0536** | **0.1613** | **0.0167** | **0.0403** | **0.0810** |
| | 20,000 | MOEAD | 0.0503 | 0.0283 | 1.0505 | 0.1175 | -0.0047 | 0.0414 | 0.1353 |
| | 25,000 | UMOD | **0.0007** | **0.0266** | **1.0537** | **0.1620** | **0.0168** | **0.0403** | **0.0813** |
| | 25,000 | MOEAD | 0.0509 | 0.0285 | 1.0503 | 0.1175 | -0.0047 | 0.0414 | 0.1353 |
| | 40,000 | UMOD | **0.0003** | **0.0266** | **1.0537** | **0.1624** | **0.0167** | **0.0403** | **0.0816** |
| | 40,000 | MOEAD | 0.0504 | 0.0283 | 1.0504 | 0.1175 | -0.0047 | 0.0414 | 0.1353 |
| RE21 | 15,000 | UMOD | **0.0015** | **0.0164** | **1.2513** | **0.1241** | **-0.0202** | **0.0322** | **0.0661** |
| | 15,000 | MOEAD | 0.0845 | 0.0237 | 1.2447 | 0.0721 | -0.0580 | 0.0423 | 0.1792 |
| | 20,000 | UMOD | **0.0013** | **0.0164** | **1.2512** | **0.1250** | **-0.0202** | **0.0322** | **0.0648** |
| | 20,000 | MOEAD | 0.0853 | 0.0239 | 1.2452 | 0.0721 | -0.0581 | 0.0425 | 0.1812 |
| | 25,000 | UMOD | **0.0007** | **0.0164** | **1.2513** | **0.1266** | **-0.0203** | **0.0322** | **0.0643** |
| | 25,000 | MOEAD | 0.0853 | 0.0239 | 1.2452 | 0.0721 | -0.0581 | 0.0425 | 0.1812 |
| | 40,000 | UMOD | **0.0011** | **0.0164** | **1.2520** | **0.1256** | **-0.0202** | **0.0321** | **0.0645** |
| | 40,000 | MOEAD | 0.0853 | 0.0239 | 1.2453 | 0.0721 | -0.0582 | 0.0425 | 0.1812 |
| RE22 | 15,000 | UMOD | **0.0675** | **0.0200** | 1.1802 | **0.0050** | **-0.0800** | **0.0406** | **0.0854** |
| | 15,000 | MOEAD | 0.0460 | 0.0194 | **1.1867** | 0.0601 | -0.0489 | 0.0385 | 0.1197 |
| | 20,000 | UMOD | **0.0676** | **0.0198** | 1.1809 | **0.0036** | **-0.0786** | **0.0402** | **0.0866** |
| | 20,000 | MOEAD | 0.0460 | 0.0194 | **1.1867** | 0.0601 | -0.0489 | 0.0385 | 0.1197 |
| | 25,000 | UMOD | **0.0676** | **0.0198** | 1.1809 | **0.0036** | **-0.0786** | **0.0402** | **0.0866** |
| | 25,000 | MOEAD | 0.0460 | 0.0194 | **1.1867** | 0.0602 | -0.0489 | 0.0385 | 0.1197 |
| | 40,000 | UMOD | **0.0677** | **0.0199** | 1.1822 | **0.0033** | **-0.0792** | **0.0403** | **0.0875** |
| | 40,000 | MOEAD | 0.0460 | 0.0194 | **1.1868** | 0.0601 | -0.0489 | 0.0385 | 0.1197 |
UMOD is integrated within the Pymoo environment, which supports both evolutionary algorithms (popular in multi-objective optimization) and gradient-based ML problems. LSSA is implemented in the PlatEMO environment, and function evaluations (FEs) are counted somewhat differently between Pymoo and PlatEMO. For bi-objective problems, LSSA requires around 50,000 evaluations (100 subproblems × 500 iterations).
Table 1 highlights two key points:
- Without subset selection, MOEA/D fails to produce uniform Pareto solutions even with large iterations, suggesting that **it is worthy to use extra FEs to find the topK uniform solutions**, which is the motivation of this paper.
- **Reducing function evaluations** from 40,000 (our original setting) to 15,000 (K=10) is possible **without significantly impacting performance**. Results remain **robust when FE > 15,000**.
(40,000 is for K=10, while 32,000 (previously mentioned) is for K=8.)
The number of FEs could be further reduced in three ways:
1. Using the last iteration as a warm start for generating uniform solutions, as MOEA/D provides good approximations.
2. Making the epoch for calculating the optimal preference configuration adaptive to save computation.
3. Using all populations to train the Pareto front model rather than only the main population (the method currently used).
---
Rebuttal Comment 5.1:
Comment: Thank you for your effort in providing new results. However, the new results are not very convincing to me because the selected problems are too simple. For example, the PS is ZDT1 lies in $x_i=0$ for $i\ge 2$. All the decision variables are clipped to [0,1], so there is actually only one effective decision variable. Therefore, ZDT1 does not need much budget for searching and converging. I suggest the author use DTLZ1 instead of ZDT1 to provide a more convincing experiment setting. I am not asking for new results, and this is only my advice for improving this new experiment if the authors are considering including it in the paper.
---
Rebuttal 6:
Title: Comparison between UMOD and LSSA
Comment: **Table 2. Comparison with LSSA (Wang et al. 2022). IGD, spacing, and sparsity are scaled by 100, while uniformity, soft uniformity, and FD are scaled by 10 for a better illustration.**
| | | HV | IGD | Spacing | Sparsity | Uniform | Soft uniform |FD|
|-------|------|---------------|----------------|---------------|---------------|---------------|----------------|---------------|
| ZDT1 | UMOD | **1.04** | **5.19** | 0.12 | **4.39** | 2.07 | 0.77 | **1.04** |
| | LSSA | 1.04 | 5.22 | **0.1** | 4.45 | **2.09** | **0.78** | 1.04 |
| ZDT2 | UMOD | **0.71** | 5.23 | **0.25** | **4.45** | **2.05** | **0.78** | 1.07 |
| | LSSA | 0.71 | **5.19** | 0.54 | 4.45 | 2.03 | 0.78 | **1.05** |
| ZDT3 | UMOD | 0.89 | 39.8 | 3.57 | **3.45** | **1.2** | **0.3** | 9.22 |
| | LSSA | **0.91** | **38.45** | **3.27** | 3.88 | 1.53 | 0.42 | **8.87** |
| ZDT4 | UMOD | **1.04** | **5.21** | **0.22** | **4.39** | 2.05 | 0.77 | 1.06 |
| | LSSA | 1.02 | 5.41 | 0.34 | 4.49 | **2.06** | **0.79** | **1.05** |
| ZDT6 | UMOD | **0.66** | **4.18** | **0.12** | **2.86** | **1.67** | **0.35** | **0.81** |
| | LSSA | 0.66 | 4.54 | 3.04 | 3.07 | 1.01 | 0.17 | 1.2 |
| DTLZ1 | UMOD | **1.69** | **4.87** | **0.09** | **0.73** | 1.39 | -0.9 | **0.78** |
| | LSSA | 1.42 | 32.77 | 3.06 | 1.93 | **1.91** | **0.47** | 3.67 |
| DTLZ2 | UMOD | **1.07** | **12.1** | **0.52** | **1.61** | **3.17** | 1.18 | 2.37 |
| | LSSA | 1.07 | 12.23 | 2.2 | 2.17 | 2.96 | **1.21** | **2.22** |
| DTLZ3 | UMOD | **1.06** | **12.3** | **1.93** | **1.7** | 2.72 | 1.13 | **2.39** |
| | LSSA | 0 | 1345.87 | 281.52 | 923.37 | **6.72** | **3.45** | 136.54 |
| DTLZ4 | UMOD | **1.07** | **12.1** | **0.83** | **1.77** | **3.07** | 1.18 | 2.41 |
| | LSSA | 1.06 | 12.39 | 2.29 | 2.34 | 2.99 | **1.23** | **2.32** |
| DTLZ5 | UMOD | **0.72** | **0.63** | **0.7** | 0.03 | 0 | -3.62 | **0.24** |
| | LSSA | 0.7 | 2.67 | 0.51 | **1.11** | **0.96** | **-0.72** | 0.54 |
| DTLZ6 | UMOD | 0.69 | 3.23 | **1.06** | 0.06 | 0 | -3.65 | **0.46** |
| | LSSA | **0.7** | **2.68** | 1.16 | **1.11** | **0.87** | **-0.73** | 0.57 |
Thank you for mentioning LSSA as a strong baseline. The code is available at [GitHub](https://github.com/ilog-ecnu/LSSA) and uses PlatEMO version 3.4. Key observations from Table 2 are:
- UMOD outperforms LSSA on most problems, particularly for the IGD indicator, where UMOD excels in 8 out of 11 cases.
- Our reimplementation shows that LSSA struggles with a small population size, notably on ZDT6 and DTLZ3; such small-population settings are one of the focuses of this paper.
- LSSA does outperform UMOD on ZDT3 (disconnected case), and we plan to adapt UMOD for such scenarios.
Additionally, UMOD offers novel theoretical insights, including the relationship between IGD, fill distance, and the max-min distance, along with an error analysis. Beyond the evolutionary problems considered in this paper, UMOD also effectively addresses large-scale machine learning challenges using purely gradient-based methods.
------
Reference
[1] Enhancing Diversity by Local Subset Selection in Evolutionary Multiobjective Optimization. Wang et al. IEEE TEC. 2022.
---
Rebuttal Comment 6.1:
Comment: Thank you for your response. The results are very impressive. Can you provide some details about the settings? For example, the number of solutions, the number of decision variables, the number of objective functions, and the FE budget.
---
Rebuttal 7:
Title: Detailed Settings of the Comparison Experiments between LSSA and UMOD
Comment: Thank you for your consistent support and interest in our work.
The settings compared with LSSA are the same as in our main paper (Table 5, Appendix). We will include all additional settings discussed during the rebuttal.
**Table 3. Settings of the comparison between LSSA and UMOD.**
| | Bi-objective | Three-objective (DTLZ 1-4) | Degenerated three-objective (DTLZ 5-6) |
|---------------------------|--------------|-----------------------------|----------------------------------------|
| Number of solutions | 8 | 21 | 16 |
| Number of seeds | 31 | 31 | 31 |
| Func. Eval. (FE) | 32,000 | 126,000 | 96,000 |
| Number of decision variables | 30 | 30 | 30 |
**Table 4. Software for LSSA and UMOD.**
| | LSSA | UMOD (for MOEA) |
|----------|-------------|-----------------|
| Software | PlatEMO 3.4 | Pymoo 0.6.1.1 |
The FE numbers and UMOD results follow those in our main paper. As discussed, the **FE budget can be significantly reduced**. For example, we reduced the FE for the bi-objective case from 32,000 to 12,000 **without affecting UMOD's performance**. Improved FE results for three-objective problems will be released soon.
---
Rebuttal 8:
Comment: Thank you for your response, and thank you for your effort in improving the empirical study. However, I am confused about some of the numbers you reported. The PF of DTLZ5 and DTLZ6 is the same. However, the IGD of UMOD on DTLZ5 is 0.63, and the IGD of UMOD on DTLZ6 is 3.23. Why do they differ so much on the same PF? Moreover, is the number in the table an average or a median?
In addition, the population size used in your experiment is significantly smaller than that employed in other studies. In practical applications, a larger population is often necessary to adequately approximate an irregular or high-dimensional PF. I am curious about whether UMOD encountered difficulties when using a larger population size, such as 100 solutions on 3-objective problems.
---
Rebuttal 9:
Title: Results comparison with different FE on three-objective problems
Comment: **Table 5. Results comparison with different FE on three-objective problems.**
| Problem | Iterations | Method | Spacing | Sparsity | HV | Uniform | Soft Uniform | IGD | MaxGD |
|---------|------------|--------|-----------|-----------|-----------|-----------|--------------|-----------|-----------|
| DTLZ1 | 63,000 | UMOD | 0.0079 | **0.0067**| **1.6926**| 0.1181 | -0.0903 | 0.0491| 0.0861 |
| | 63,000 | MOEAD | **0.0002**| 0.0075 | 1.6917 | **0.1408**| **-0.0899** | **0.0488** | **0.0750**|
| | 84,000 | UMOD | 0.0044 | **0.0070**| **1.6929**| 0.1274 | -0.0902 | 0.0489 | 0.0857 |
| | 84,000 | MOEAD | **0.0002**| 0.0075 | 1.6917 | **0.1408**| **-0.0899** | **0.0488**| **0.0750**|
| | 105,000 | UMOD | 0.0031 | **0.0071**| **1.6929**| 0.1315 | **-0.0900** | 0.0488 | 0.0843 |
| | 105,000 | MOEAD | **0.0002**| 0.0075 | 1.6914 | **0.1408**| -0.0900 | **0.0488**| **0.0750**|
| | 126,000 | UMOD | 0.0025 | **0.0072**| **1.6929**| 0.1331 | **-0.0900** | 0.0488 | 0.0820 |
| | 126,000 | MOEAD | **0.0002**| 0.0074 | 1.6913 | **0.1404**| -0.0901 | **0.0487**| **0.0750**|
| DTLZ2 | 63,000 | UMOD | **0.0151**| **0.0174**| **1.0637**| **0.2783**| **0.1155** | **0.1223**| **0.2387**|
| | 63,000 | MOEAD | 0.0545 | 0.0243 | 1.0624 | 0.2433 | 0.0947 | 0.1263 | 0.2514 |
| | 84,000 | UMOD | **0.0044**| **0.0177**| **1.0625**| **0.3180**| **0.1178** | **0.1218**| **0.2370**|
| | 84,000 | MOEAD | 0.0546 | 0.0243 | 1.0626 | 0.2433 | 0.0947 | 0.1263 | 0.2515 |
| | 105,000 | UMOD | **0.0070**| **0.0173**| **1.0661**| **0.3100**| **0.1178** | **0.1222**| **0.2433**|
| | 105,000 | MOEAD | 0.0546 | 0.0243 | 1.0625 | 0.2433 | 0.0947 | 0.1263 | 0.2515 |
| | 126,000 | UMOD | **0.0042**| **0.0169**| **1.0730**| **0.3237**| **0.1204** | **0.1250**| **0.2477**|
| | 126,000 | MOEAD | 0.0546 | 0.0243 | 1.0626 | 0.2433 | 0.0947 | 0.1263 | 0.2515 |
| DTLZ3 | 63,000 | UMOD | **0.0150**| **0.0184**| 1.0531 | **0.2937**| **0.1177** | **0.1237**| 0.2576|
| | 63,000 | MOEAD | 0.0545 | 0.0243 | **1.0564**| 0.2439 | 0.0958 | 0.1267 | **0.2524** |
| | 94,500 | UMOD | **0.0064**| **0.0183**| 1.0586 | **0.3134**| **0.1208** | **0.1236**| **0.2266**|
| | 94,500 | MOEAD | 0.0546 | 0.0244 | **1.0601**| 0.2436 | 0.0950 | 0.1264 | 0.2518 |
| | 126,000 | UMOD | **0.0025**| **0.0176**| **1.0610**| **0.3213**| **0.1177** | 0.1219| **0.2351**|
| | 126,000 | MOEAD | 0.0546 | 0.0244 | 1.0602 | 0.2436 | 0.0950 | **0.1157** | 0.2749 |
| DTLZ4 | 63,000 | UMOD | **0.0142**| **0.0231**| **1.0711**| **0.3057**| **0.1194** | 0.1252| **0.2672**|
| | 63,000 | MOEAD | 0.0546 | 0.0243 | 1.0626 | 0.2434 | 0.0947 | **0.1155** | 0.2748 |
| | 94,500 | UMOD | **0.0045**| **0.0178**| **1.0645**| **0.3181**| **0.1195** | 0.1233 | **0.2326**|
| | 94,500 | MOEAD | 0.0547 | 0.0243 | 1.0627 | 0.2433 | 0.0947 | **0.1155** | 0.2748 |
| | 126,000 | UMOD | **0.0024**| **0.0170**| **1.0631**| **0.3226**| **0.1178** | 0.1223| **0.2544**|
| | 126,000 | MOEAD | 0.0546 | 0.0243 | 1.0626 | 0.2433 | 0.0947 | **0.1155** | 0.2748 |
In the previous discussion, we identified additional methods to further reduce FE and will incorporate them in the revised paper. Even with the original strategy, FE can be reduced from 126,000 to approximately 63,000 for 21 solutions. Table 5 highlights key insights:
- For the DTLZ1 problem, as shown in the main paper, uniform preferences yield uniform Pareto objectives. UMOD and MOEAD perform similarly, indicating that UMOD generates nearly uniform Pareto objectives.
- While further improvement beyond 63,000 FE is minimal, UMOD significantly outperforms MOEAD on DTLZ2 and DTLZ4.
---
Rebuttal 10:
Title: We choose a small population because it is hard!
Comment: We chose a small population to test whether a few solutions can approximate the entire Pareto front (PF). With a larger population, fitting the network is easier. **Our key point is that even with fewer solutions, UMOD with a neural model produces uniform Pareto objectives**.
We report the mean in Table 2.
---
Rebuttal 11:
Title: Results on DTLZ5 and 6
Comment: You can view our visual results in the one-page summary, where the results on DTLZ5 and 6 are accurate. DTLZ5 and DTLZ6 were run on a new computer. However, there is a minor issue in the IGD computation with the pymoo library (line 256 in dtlz.py), where the default number of partitions is set to 15. This causes numerical instability in approximating the PF and IGD. We have adjusted it to `n_partitions=50`; the line of code has been changed to `ref_dirs = UniformReferenceDirectionFactory(3, n_partitions=50).do() # Original is 15`.
We are aware of this issue, and all other results are correct (they were produced on our main PC). We will re-run the IGD values for DTLZ5 and DTLZ6.
New results will be updated soon.
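For context on why the partition setting matters: to our understanding, pymoo's `UniformReferenceDirectionFactory` uses the Das-Dennis simplex-lattice construction, so the number of reference directions used to discretize the PF grows combinatorially with `n_partitions`. A minimal sketch of the count (pure Python, an illustration rather than pymoo's own code):

```python
from math import comb

def das_dennis_count(m: int, n_partitions: int) -> int:
    """Number of Das-Dennis reference directions for m objectives.

    Each direction is a point on the unit simplex whose coordinates are
    multiples of 1/n_partitions summing to 1, so the count equals the
    number of ways to distribute n_partitions units over m objectives.
    """
    return comb(n_partitions + m - 1, m - 1)

# Default in pymoo's 3-objective DTLZ problems vs. the denser setting
# we switched to:
print(das_dennis_count(3, 15))  # 136 reference directions
print(das_dennis_count(3, 50))  # 1326 reference directions
```

With only 136 reference points, a small approximation error per point can noticeably shift the reported IGD; raising the partition count to 50 densifies the reference set roughly tenfold.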
---
Rebuttal Comment 11.1:
Comment: Thank you for your response. I have checked the code of PyMOO. I carefully read `dtlz.py` at https://github.com/anyoptimization/pymoo/blob/main/pymoo/problems/many/dtlz.py. I think the setting of `n_partitions` does not affect the results on DTLZ5 and DTLZ6 because `ref_dirs = UniformReferenceDirectionFactory` is only used in DTLZ1-4. Moreover, I am also very confused as to why running on a new computer could result in the wrong result.
---
Rebuttal Comment 11.2:
Title: About Source Code of UMOD
Comment: I was trying to run the code uploaded by the authors, but I noticed that they only uploaded the `umod.py` file without other critical libraries and files like `xy_util` and `mo_util`, so the code cannot run actually. Could you please provide a full program?
---
Reply to Comment 11.2.1:
Title: Source Code of UMOD
Comment: When we submitted the code, there were still some privacy issues in the project that we had not had time to address, so we provided only the main file to explain the algorithm's workflow.
For UMOD applied to multitask learning problems, there is now an open-source implementation in the libmoon framework. Code is available [here](https://github.com/xzhang2523/libmoon/blob/main/libmoon/tester/run_mtl_clean_pref.py) by setting `solver` to `uniform`.
---
Rebuttal Comment 11.3:
Title: Results on DTLZ5 and 6
Comment: When calculating the Pareto front (PF) for DTLZ5 and DTLZ6, we encountered an intriguing bug. After thorough investigation, we discovered that results may vary on different PCs due to a minor issue in pymoo (version 0.6.1.1). When using `Remote.get_instance().load("pymoo", "pf", "dtlz5-3d.pf")` to calculate the PF, the file is loaded from `A\\envs\\B\\Lib\\site-packages\\pymoo\\data\\pymoo\\pf\\dtlz5-3d.pf`, where 'A' is the anaconda installation directory and 'B' is the environment name.
Interestingly, the `dtlz5-3d.pf` file is 387KB, while `dtlz6-3d.pf` is only 32KB, indicating that the PF for DTLZ6 is incomplete. To resolve this issue, we used the `dtlz5-3d.pf` file for both DTLZ5 and DTLZ6 calculations.
The minor bug in pymoo has been fixed. We will report the DTLZ5/6 results and the performance of UMOD with large populations soon.
---
Rebuttal 12:
Comment: After correcting the IGD calculation, the revised DTLZ5 and DTLZ6 results are as follows.
**Table 6. Results on DTLZ5.**
| | Spacing | Sparsity | HV | Uniform | Soft uniform | IGD | FD |
|----------|------------|------------|------------|------------|--------------|------------|------------|
| UMOD | **0.0073** | **0.0131** | 0.6995 | **0.0867** | **-0.0634** | **0.0294** | **0.0850** |
| MOEAD | 0.0871 | 0.0645 | 0.6352 | 0.0000 | -0.1978 | 0.1430 | 0.3002 |
| AWA | 0.0909 | 0.0312 | 0.6697 | 0.0000 | -0.1802 | 0.0712 | 0.1697 |
| SMS-MOEA | 0.0454 | 0.0147 | **0.7035** | 0.0783 | -0.0753 | 0.0307 | 0.1099 |
| NSGA3 | 0.0599 | 0.0289 | 0.6623 | 0.0005 | -0.1417 | 0.0683 | 0.1802 |
**Table 7. Results on DTLZ6.**
| | Spacing | Sparsity | HV | Uniform | Soft uniform | IGD | FD |
|----------|------------|------------|------------|------------|--------------|------------|------------|
| UMOD | **0.0125** | **0.0128** | 0.7011 | **0.0738** | **-0.0618** | **0.0285** | **0.0731** |
| MOEAD | 0.0843 | 0.0599 | 0.6352 | 0 | -0.2032 | 0.143 | 0.3002 |
| AWA | 0.0908 | 0.0312 | 0.6697 | 0 | -0.1694 | 0.0712 | 0.1697 |
| SMS-MOEA | 0.0426 | 0.0143 | **0.7036** | 0.072 | -0.0752 | 0.0305 | 0.1099 |
| NSGA3 | 0.0524 | 0.0415 | 0.6623 | 0 | -0.1357 | 0.0931 | 0.2972 |
Tables 6 and 7 highlight that UMOD significantly outperforms the other methods in producing evenly distributed solutions. While NSGA3, MOEA/D, and MOEA/D-AWA produce duplicate solutions, SMS-MOEA is the only method with an HV comparable to UMOD; however, UMOD surpasses SMS-MOEA considerably in IGD and FD.
---
Rebuttal 13:
Title: Numerical results for larger K
Comment: **Table 6. Results for large solution numbers (K).**
| Problem | Method | Spacing | Sparsity | HV | Uniform | Soft uniform | IGD | FD |
|-----------------|----------|------------|------------|------------|------------|--------------|------------|------------|
| ZDT1 (sols=50) | UMOD | **0.0018** | **0.0009** | **1.0973** | **0.0270** | **-0.2381** | **0.0072** | **0.0156** |
| | MOEAD | 0.0129 | 0.0010 | 1.0964 | 0.0215 | -0.2458 | 0.0078 | 0.0347 |
| | AWA | 0.0130 | 0.0011 | 1.0961 | 0.0215 | -0.2431 | 0.0079 | 0.0347 |
| | SMS-MOEA | 0.0052 | 0.0009 | 1.0972 | 0.0195 | -0.2390 | 0.0075 | 0.0173 |
| | NSGA3 | 0.0168 | 0.0012 | 1.0962 | 0.0217 | -0.2456 | 0.0075 | 0.0361 |
| ZDT2 (sols=50) | UMOD | **0.0016** | 0.0009 | 0.7639 | **0.0239** | **-0.2374** | **0.0074** | **0.0162** |
| | MOEAD | 0.0049 | **0.0009** | 0.7632 | 0.0198 | -0.2379 | 0.0075 | 0.0167 |
| | AWA | 0.0049 | 0.0009 | 0.7632 | 0.0202 | -0.2380 | 0.0075 | 0.0167 |
| | SMS-MOEA | 0.0098 | 0.0010 | **0.7642** | 0.0207 | -0.2405 | 0.0090 | 0.0404 |
| | NSGA3 | 0.0048 | 0.0009 | 0.7636 | 0.0206 | -0.2379 | 0.0075 | 0.0172 |
| DTLZ1 (sols=91) | UMOD | 0.0009 | 0.0006 | 1.7016 | 0.0553 | -0.2839 | 0.0203 | 0.0341 |
| | MOEAD | **0.0003** | **0.0007** | 1.7002 | **0.0581** | **-0.2843** | **0.0203** | **0.0328** |
| | AWA | 0.0086 | 0.0007 | 1.7002 | 0.0000 | -0.2837 | 0.0205 | 0.0531 |
| | SMS-MOEA | 0.0092 | 0.0044 | 1.6930 | 0.1064 | -0.0926 | 0.0491 | 0.1013 |
| | NSGA3 | 0.0005 | 0.0006 | **1.7017** | 0.0566 | -0.2842 | 0.0203 | 0.0333 |
| DTLZ2 (sols=91) | UMOD | **0.0054** | **0.0013** | 1.1446 | **0.1276** | **-0.1607** | 0.0542 | **0.1023** |
| | MOEAD | 0.0277 | 0.0013 | 1.1406 | 0.0896 | -0.1688 | **0.0535** | 0.1131 |
| | AWA | 0.0255 | 0.0013 | 1.1414 | 0.0759 | -0.1678 | 0.0537 | 0.1130 |
| | SMS-MOEA | 0.0335 | 0.0018 | **1.1510** | 0.0443 | -0.1967 | 0.0797 | 0.1928 |
| | NSGA3 | 0.0273 | 0.0013 | 1.1419 | 0.0903 | -0.1683 | 0.0536 | 0.1134 |
From Table 6, we find that UMOD outperforms previous MOEA methods on the ZDT1, ZDT2, and DTLZ2 problems, particularly on the covering-radius and even-spread indicators. Since DTLZ1 is the only problem where uniform preferences induce uniform Pareto objectives, MOEA/D with uniform preferences performs best on DTLZ1. Despite not directly optimizing IGD, UMOD significantly excels on the IGD indicator, especially for ZDT1 and ZDT2. We will incorporate these results into our paper.
If you have any further concerns, please let us know.
---
Rebuttal Comment 13.1:
Comment: Thank you for your results. Could you report the FE budget of this experiment?
---
Rebuttal 14:
Title: FE for many-solution problems
Comment: Dear Reviewer N5xS,
For bi-objective problems, the FE count is 75,000 (50 solutions × 1,500 generations), similar to the settings with fewer sub-problems. For three-objective problems, the FE count is 273,000 (91 solutions × 3,000 generations). As mentioned in our response under "Analysis of Different Iterations," three methods could significantly reduce FE, allowing for further reductions.
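The FE accounting above follows directly from one evaluation per solution per generation; a trivial sketch of the arithmetic:

```python
def fe_budget(pop_size: int, generations: int) -> int:
    """Total function evaluations: one evaluation per solution per generation."""
    return pop_size * generations

print(fe_budget(50, 1500))   # 75,000 FEs for the bi-objective setting
print(fe_budget(91, 3000))   # 273,000 FEs for the three-objective setting
```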
We also wish to clarify our paper's motivation:
- We aim to propose the first optimizable indicator for MOO that, when optimized, guarantees the algorithm returns uniformly distributed Pareto objectives. Optimizing such an indicator is elegant and easy to understand (just like optimizing HV), whereas methods like LSSA need complex mechanisms. Optimizing this indicator yields uniform solutions for both evolutionary and gradient-based problems. Our experiments confirm we achieved this goal.
- Our goal in pursuing uniform Pareto objectives is to cover the entire Pareto front; UMOD significantly outperforms baselines in terms of FD (covering radius). The main paper focuses on using a small population to cover the whole PF (a harder case). In the rebuttal, we demonstrate that UMOD also performs well with a larger population (K).
- While there is room for technical and engineering improvements to reduce FE, our primary focus is on the original idea, theoretical results, and designing a practical algorithm. We will actively leverage techniques to reduce FE.
- For the machine learning problems addressed in this paper, function evaluation is relatively inexpensive, making FE less critical in gradient-based methods. UMOD is the first method to find uniform Pareto objectives with low computation.
If you find our rebuttal helpful, we kindly ask you to consider raising your score for our paper. Thank you for your encouraging and constructive feedback.
Best regards, 5680 authors
---
Rebuttal Comment 14.1:
Comment: Thank you for your response. With your patient explanation and additional results, I think now I have a more comprehensive understanding of your work. I increased my rating to 6, and I think this paper can be weakly accepted. I do not require further clarification.
Moreover, I strongly suggest the authors include a more comprehensive empirical study in the camera-ready version (if accepted). Currently, most of the presented results use a very small population size. I acknowledge that it is usually a difficult task to perform well with a small population size, but I think a good MOEA should be compatible with a wide range of population sizes. To my knowledge, most related work may adopt a population size of around 100 for 3-obj problems. Additionally, this work tends to adopt a very large FE. UMOD performs well with a large FE because it can make use of extensive evaluations to refine the distributions, thus achieving a better benchmark performance, but MOEA/D cannot. I think a wide range of audiences may be interested in the performance of UMOD with a larger population size and a smaller FE.
Another limitation of this work is it does not touch on the real complex PFs. The PF of the adopted problems is very simple. Although the reviewers are interested in the results of DTLZ7, the authors refused to present them. This work does not provide new insights into how to deal with complicated PFs, which I think is a very important topic with high practical value. The authors said the theories do not hold for a disconnected PF, so the proposed method is not applicable. I suspect the real limitation may lie in the PF learning. The PF learning mechanism in this paper does not seem to be able to learn a disconnected PF. The current model cannot model the boundary of the PF; thus if the weight vectors go outside the boundary, uniformity can no longer be guaranteed. I recommend an interesting test suite called IMOP, which contains many extremely complicated PFs. The authors could adopt these problems for future research.
Last, I would ask the authors to include a more comprehensive summary of the limitations of UMOD in the camera-ready version. I also suggest the authors double-check the experimental details such as the reference set for IGD. PyMOO is not a well-verified library, so it may have a lot of bugs.
---
Rebuttal 15:
Title: Final conclusion
Comment: Dear reviewer,
Thank you for pointing out necessary references (e.g., LSSA), for raising your score, and for your consistent support of our work. We want to make a final response to your concerns, and we guarantee we will include these issues in our paper, since you have devoted so much precious time and offered professional suggestions.
Here is our final response to your comments:
- **Comprehensive empirical study**: We will add extensive empirical studies, including all ablation and sensitivity analyses, beyond what was presented during the rebuttal.
- **Larger population and FE issues**: We will include results for both small and large populations in the main paper and discuss the impact of different FEs in MOEAs to appeal to a broader audience.
- **Complex PF**: Handling complex Pareto fronts (PFs) is critical in the MOEA community. UMOD is well-suited for degenerated PFs (e.g., DTLZ5/6) within the decomposition framework, as it helps find the preferences corresponding to representative solutions. For disconnected PFs, the challenges include (1) improving PF model estimation with techniques such as self-organizing maps, and (2) optimizing the max-min distance, which is more difficult.
- **Limitations**: We will discuss limitations in detail, including the use of more FEs and the ability to handle complex PFs.
---
Summary: The paper presents a new approach to effectively represent the entire Pareto front in multi-objective optimization problems. The authors propose using fill distance as a new metric for uniformity in MOO, addressing the challenge of quantifying the representativeness of design points.
Strengths: - The paper presents a novel approach to MOO by introducing the concept of fill distance and a surrogate objective function.
- The UMOD algorithm provides bounds on the optimization error. Besides, it demonstrates that as the size of the solution set K increases, the obtained Pareto set asymptotically converges to a uniform distribution over the Pareto front, which ensures the effectiveness of the algorithm.
- The paper makes comprehensive empirical evaluations.
Weaknesses: - The proposed method relies on a neural network to approximate the Pareto front. The performance of the method could be sensitive to the quality of the neural network model, its training data, and hyperparameters.
- While the paper claims that UMOD is efficient, there is limited discussion on the method to scale to larger problems.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The paper mentions the use of neural networks for approximating the Pareto front. Could the authors elaborate on the choice of the neural network architecture and how it impacts the optimization process?
- Are there any additional limitations or specific considerations for higher-dimensional Pareto fronts?
- While the paper demonstrates the method on several benchmark problems, what are the scalability considerations when applying UMOD to larger, more complex problems, especially those with a high number of objectives or decision variables?
- It would be interesting to know how the algorithm performs on DTLZ5-7, where the assumptions are not satisfied.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have acknowledged the complexity of the optimization landscape introduced by the neural network. They could further discuss how their method handles cases where the Pareto front is not well-approximated by the neural network or where the front is highly irregular.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback. We hope the following response addresses your concerns.
Since W1 and Q1 are related, we combine the answers here.
**W1. The proposed method relies on a neural network to approximate the Pareto front. The performance of the method could be sensitive to the quality of the neural network model, its training data, and hyperparameters.\
Q1. Could the authors elaborate on the choice of the neural network architecture and how it impacts the optimization process?**
Neural networks in this paper serve as universal approximators of the continuous mapping from preference angles to Pareto objectives. Although neural networks are indeed sensitive to architecture, training data, and hyperparameters, this flexibility also allows substantial improvement of performance through careful investigation of the problem structure, which is widely accepted in modern machine learning.
On the other hand, our experiments show that the performance of the NN is empirically robust. Here, we construct 6 NN architectures and examine the performance sensitivity.
| Model | Architecture |
|-------|-----------------------------------------------------------------------------------|
| $M_1$ | $(m-1) - 256 - 256 - 256 - m$ |
| $M_2$ | $(m-1) -128 - 128 - 128 - m$ |
| $M_3$ | $(m-1) - 64 - 64 - 64 - m$ |
| $M_4$ | $(m-1) - 256 - 256 - m$ |
| $M_5$ | $(m-1) - 128 - 128 - m$ |
| $M_6$ | $(m-1) - 64 - 64 - m$ |
**Table 1. Model architecture. Non-linear activation functions are ReLU.**
The table shows that network structure minimally impacts the final results. Due to space constraints, experiments on other problems are not reported here but will be included in the revised paper.
| Architecture | Spacing$\downarrow$ | Sparsity$\downarrow$ | HV$\uparrow$ | IGD$\downarrow$ | FD$\downarrow$ |
|--------------|---------|----------|-------|-------|--------|
| M1 | 0.0030 | 0.0445 | **0.7120**| 0.0522| 0.1051 |
| M2 | 0.0015 | **0.0445** | 0.7119| **0.0522**| 0.1045 |
| M3 | **0.0011** | 0.0445 | 0.7119| 0.0523| 0.1054 |
| M4 | 0.0016 | 0.0445 | 0.7119| 0.0523| 0.1052 |
| M5 | 0.0013 | 0.0445 | 0.7119| 0.0523| **0.1043** |
| M6 | 0.0054 | 0.0445 | 0.7116| 0.0522| 0.1078 |
**Table 2. Results on various models for the ZDT1 problem.**
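To make the mapping these models implement concrete, here is a minimal NumPy sketch of the smallest architecture $M_6$, i.e. $(m-1)-64-64-m$ with ReLU activations. The weights below are random placeholders for illustration only, not trained parameters from our experiments:

```python
import numpy as np

def init_mlp(m: int, hidden: int = 64, seed: int = 0):
    """Random weights for an (m-1)-hidden-hidden-m MLP (architecture M6).

    Placeholder initialization; in UMOD the model is fitted so that
    preference angles map to Pareto objectives.
    """
    rng = np.random.default_rng(seed)
    dims = [m - 1, hidden, hidden, m]
    return [(rng.standard_normal((a, b)) * np.sqrt(2.0 / a), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, angles: np.ndarray) -> np.ndarray:
    """Map a batch of preference angles (K, m-1) to objectives (K, m)."""
    h = angles
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:      # ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return h

m, K = 3, 21                          # 3 objectives, 21 preference angles
params = init_mlp(m)
angles = np.linspace(0.0, np.pi / 2, K * (m - 1)).reshape(K, m - 1)
objs = forward(params, angles)
print(objs.shape)                     # (21, 3): one objective vector per preference
```

Note that the input and output dimensions depend only on the number of objectives $m$, not on the number of decision variables $n$, which is the basis of the scalability argument below.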
Since W2 and Q3 are related, we answer them together.
**W2. While the paper claims that UMOD is efficient, there is limited discussion on the method to scale to larger problems.\
Q3. What are the scalability considerations when applying UMOD to larger, more complex problems, especially those with a high number of objectives or decision variables.**
In Section 4.2 (Pages 8-9), we extend UMOD to solve **large-scale** machine learning problems, specifically fairness classification problems with around **2891/6811** neural network parameters as **decision variables**.
These experiments demonstrate that UMOD can handle very **large-scale** problems efficiently. During optimization, we only need to fit a model where the input is the preference angle and the output is the Pareto objectives, with dimensions $m-1$ and $m$, respectively, independent of the number of decision variables $n$.
Additionally, generating new preference angles has complexity related to $m$ rather than $n$. Considering $n \geq m$, those two operations are efficient. Apart from these operations, UMOD functions similarly to standard MOEA/D, contributing to its efficiency.
Also, in Section 4.1, we have conducted experiments on **four-objective** problems, namely RE41 and RE42. Our visual results (see Figures 8 and 9 for full results) demonstrate that UMOD substantially outperforms baseline methods in finding K uniform Pareto solutions on the entire PF.
**Q2. Are there any additional limitations or specific considerations for higher-dimensional Pareto fronts.**
For high-dimensional Pareto fronts, we do not have very specific designs, but we note three key differences: (1) more samples are needed to train the Pareto front model, so the number $K$ of candidate solutions should be larger; (2) more training iterations of the Pareto front model are needed; (3) more preference-angle adaptation iterations are also needed.
**Q4. It would be interesting to know how the algorithm performs on DTLZ5-7, where the assumptions are not satisfied.**
For discussion on DTLZ5-7 problems, please refer to our general response 2.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. My concerns have been addressed. I have decided to maintain my original positive score.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: We sincerely appreciate your positive feedback and are pleased that we were able to address your concerns.
---
Summary: In this paper, the authors focus on finding K uniform Pareto-optimal points that are capable of representing the entire Pareto front and propose a new metric, namely fill distance, to quantify the effectiveness of these K points. To minimize the fill distance easily, the authors adopt a surrogate model, called 'max-min'. Furthermore, the authors design an effective multi-objective algorithm, namely UMOD.
Strengths: The paper introduces a novel way of generating K uniform Pareto objectives, which is quite original. The paper has a good quality. The authors provided a rigorous proof process for the rationality of using the surrogate model, and the experimental results verified the effectiveness of the designed UMOD. The presentation of the paper is clear. The contribution of the paper is thus significant.
Weaknesses: Section 2.1 and 2.2: What are the drawbacks of the existing methods for generating diverse solutions? You need to have a simple summary. The motivation of your design is not very clear without saying the drawbacks of existing methods.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. I know the situation of solutions reflected by different metrics (such as the convergence of IGD and the uniformity of Fill Distance), but is there a consistent relationship between the classic IGD indicator and the Fill Distance indicator? (Calculated results listed in Tables 1 and 2 seem to confirm this viewpoint)
2. Is it necessary to have the Fill Distance indicator? What impact will it have on measuring the performance of different algorithms? What adverse effects will the lack of this indicator have?
3. I am curious whether it is reasonable to generate K uniformly distributed points from PF based on Euclidean distance and use them as representative solutions for PF. Especially for irregularly distributed PF situations. In practical problems, optimization problems often contain several complex constraint conditions, leading to scattered arrangements when the solution space is mapped to the target space, resulting in many irregular distributions of PF.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Authors can summarize the drawbacks or shortcomings of existing indicators and delve into the necessity of the Fill Distance metric, as well as the impact it will have on evaluating the fairness of different algorithm performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for pointing out key passages that could be complemented with the necessary background for a broader audience.
**W1. What are the drawbacks of the existing methods for generating diverse solutions? You need to have a simple summary.** \
For indicators, please refer to our response to Reviewer AxUm under W1 and W2. We will include that response as a simple summary in Appendix B1.
**Q1. Is there a consistent relationship between the classic IGD indicator and the Fill Distance indicator?**
The detailed relationship between IGD and FD can be found in our general response 1. Thank you for the insightful question. We will add the discussion to the next revision.
**Q2. Is it necessary to have the Fill Distance indicator? What impact will it have on measuring the performance of different algorithms? What adverse effects will the lack of this indicator have?**
The related discussion can be found in our general response 1. We argue FD is a better metric than its $L_1$ counterpart IGD. FD has better theoretical properties than IGD, especially in ensuring each point in the Pareto front can essentially be well represented (while IGD may dismiss some points).
Specifically for your **first** sub-question, we believe the FD indicator is necessary. Briefly:
- The FD indicator represents the entire Pareto front. Optimizing it provides the minimal covering radius, making it a space-filling design.
- FD is robust to the distribution of Pareto solutions approximating the whole front.
- We establish a rate-optimal design between FD and maximizing the minimal pairwise distance, which is easier to optimize, and provide a detailed solution distribution for this design.
For your **second** sub-question, since the FD indicator measures uniformity as a covering radius, a small FD value means the solutions generated by the algorithm are generally more uniform.
For your **last** sub-question, the main weakness is that to measure the level of uniformity, the FD indicator assumes the Pareto front is connected, which may not be suitable for disconnected problems like DTLZ7.
**Q3. I am curious whether it is reasonable to generate K uniformly distributed points from PF based on Euclidean distance and use them as representative solutions for PF. Especially for irregularly distributed PF situations. In practical problems, optimization problems often contain several complex constraint conditions, leading to scattered arrangements.**
We use the Euclidean distance because, when $K$ is large enough, it well approximates the geodesic distance on manifolds; in most experiments this strategy works well. However, due to the theoretical limitation (Theorems 3/4 assume compactness and connectivity), UMOD does struggle with a disconnected Pareto front. For irregular, disconnected Pareto fronts, a potential solution is to generate uniformly distributed Pareto solutions on each segment separately. We will investigate this approach in future work.
**L1. Authors can summarize the drawbacks or shortcomings of existing indicators and delve into the necessity of the Fill Distance metric, as well as the impact it will have on evaluating the fairness of different algorithm performance.**
Thanks for your advice. To address your concern, we conduct comparisons with the hypervolume indicator, the sparsity indicator, and the IGD indicator.
- Hypervolume is a coarse uniformity metric as HV maximization brings uncontrollable configurations of Pareto solutions. As remarked in Line 124-125, only a linear Pareto front results in equally spaced solutions when maximizing hypervolume.
- The sparsity indicator considers the sum of squared distances between neighboring Pareto objectives. However, in problems with more than two objectives, this indicator lacks a clear interpretation.
- For IGD, please refer to our general response 1. In summary, it is a weaker metric than FD.
For the impact of FD on evaluation, we argue FD is a better metric than its $L_1$ counterpart IGD. FD has better theoretical properties than IGD, especially in ensuring each point in the Pareto front can essentially be well represented (while IGD may dismiss some points).
Specifically,
- The FD indicator represents the entire Pareto front. Optimizing it provides the minimal covering radius, making it a space-filling design (Joseph et al, 2016).
- FD is robust to the distribution of Pareto solutions approximating the whole front.
- We establish a rate-optimal design between FD and maximizing the minimal pairwise distance, which is easier to optimize, and provide a detailed solution distribution for this design.
We will incorporate the discussion into the next revision.
-----------------------------------------------------
Reference
Space-Filling Designs for Computer Experiments: A Review. Joseph et al. 2016.
---
Rebuttal Comment 1.1:
Comment: The authors have provided detailed responses to my comments, including rigorous theoretical proofs and detailed textual supplements.
I think the revised paper is acceptable.
Additionally, I strongly encourage the authors to conduct similar research on multi-objective combinatorial optimization problems. In such problems, PF is discrete and discontinuous, and current practices generally use a set of non-dominated solutions from different algorithms to replace PF. Your research results may lead to many interesting discoveries in this field.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the encouraging response as well as the additional suggestion.
We agree that examining the performance on multi-objective combinatorial optimization problems would be interesting. We note, however, that given the disconnected Pareto fronts of such problems, the proposed method may be theoretically inappropriate. Nevertheless, we think our method would be an interesting baseline; as mentioned in the general response, we are more than willing to explore the potential recipe of running UMOD on each segment separately in future work.
Strengths: 1. A new metric is proposed to evaluate the uniformity of the solution set on the Pareto front.
2. Some theoretical analysis is provided for the surrogate of fill distance.
Weaknesses: 1. While there are many metrics for evaluating the uniformity of the solution set on the Pareto front, they are not discussed, and the proposed fill distance is not compared with them in terms of the ability to assess uniformity.
2. There is no discussion on how the proposed fill distance ensures that it can (at least to some extent) accurately evaluate the uniformity of the solution set on the Pareto front.
3. The standard benchmark set only considers ZDT and DTLZ problems, omitting more complex problem instances.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. How sensitive is the algorithm to the initialization method for K preferences?
2. What are the settings for the algorithms in the experiments? For instance, what are the maximum fitness evaluations and the number of independent runs for each algorithm?
3. Were statistical tests performed to compare the algorithm's performance?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: It is recommended to provide a sensitivity analysis of the algorithm concerning different values of K.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable reviews and hope our response addresses your concerns.
**W1. Fill distance (FD) is not discussed and compared with other metrics for achieving and assessing uniformity.**
We respectfully argue that some discussion has been provided.
1. **Hypervolume (HV)**: As noted in lines 124-125, HV is a coarse uniformity metric. It results in equally spaced solutions **only for a linear Pareto front**, while for a non-linear PF, the configuration of K solutions is generally unknown.
2. **Sparsity**. The sparsity indicator considers the sum of squared distances between neighboring Pareto objectives in the **non-dominated sorted order**. However, summing in the non-dominated sorted order does not have a direct meaning for problems with three or more objectives.
3. **IGD**. For the difference between FD and IGD, please refer to general response 1.
Moreover, directly using FD and IGD as indicators for a uniform configuration is challenging because they assume the true PF is known. In practice, we optimize a surrogate max-min distance for FD. The table below highlights the differences between optimizing the FD surrogate and optimizing HV to guide preference adjustment in UMOD. Most uniformity indicators show that optimizing the FD surrogate yields better results.
**Table 1. Comparison between using FD and HV in UMOD.**
| Problem | Method | Spacing $\downarrow$ | Sparsity $\downarrow$ | HV $\uparrow$ | Uniform $\uparrow$| Soft Uniform $\uparrow$| IGD $\downarrow$| FD $\downarrow$|
|--|--|--|--|--|--|--|--|--|
| ZDT1 | FD | **0.001** | **0.027** | 1.054 | **0.162** | **0.017**| 0.040| **0.082** |
| | HV | 0.018 | 0.027 | **1.054** | 0.148 | 0.014 | **0.040**| 0.091|
| ZDT2 | FD| **0.001** | **0.027** | 0.725 | **0.162** | **0.018** | **0.041** |**0.081**|
| | HV | 0.039 | 0.028| **0.727** | 0.145 | 0.008 | 0.046 | 0.136 |
| RE21 | FD | **0.001** | **0.016** | 1.252 | **0.125** | **-0.020** | **0.032**|**0.064**|
| | HV | 0.026 | 0.017| **1.253** | 0.099|-0.028|0.033| 0.090|
| RE22 | FD| **0.002** | **0.017** |1.190| **0.125** | **-0.019** | **0.033** | **0.066** |
| | HV | 0.010| 0.017| **1.192** | 0.107|-0.020| 0.033| 0.072|
**W2. How does FD evaluate uniformity?**
1. Uniformity can be measured by the discrepancy between the empirical distribution of representative points and the uniform distribution over $\mathcal{T}$ (Fang et al. 2000). FD can be rewritten exactly as the Wasserstein(W)-$\infty$ distance between a discrete distribution (induced from the input designs) and the uniform distribution $U(\mathcal{T})$. **Optimizing the FD is equivalent to minimizing the W-$\infty$ distance from a design to $U(\mathcal{T})$**. Moreover, minimizing FD is also said to produce space-filling designs (Pronzato et al., 2017), which are a kind of uniform design.
2. For the proposed rate-optimal design approaching minimal FD, Thm. 3/4 demonstrate that the design will asymptotically converge to $ U(\mathcal{T}) $ and yield an equal spacing design, representing uniformity (see Fig. 1, Line 123).
**W3. More complex problem instances besides ZDT and DTLZ in standard benchmark?**
Besides ZDT and DTLZ, we experimented with real-world multiobjective test suites (**RE** problems, Tanabe et al., 2020) and complex multiobjective machine learning problems. The reason we consider the RE problems is that Tanabe et al. claimed that current synthetic problems "are likely to include unrealistic properties which may lead to overestimation/underestimation". Additionally, we provide new results on the synthetic **UF** problems (Li et al., 2008) and **Fi** problems (Lin et al., 2022) (cf. the one-page PDF).
**Q1. Sensitivity to the preferences initialization?**
We do the following ablation studies (cf one page pdf).
- **Initial preferences span from $[0, 1]$ to $[0.5, 0.5]$**. UMOD then expands this to the full preference space, finding Pareto solutions up to $[0.9, 0.1]$.
- **Non-uniform preferences (truncated Gaussian)**. UMOD still achieves uniform Pareto objectives, demonstrating robustness to non-uniform initialization.
**Q2. What are the algorithm settings used in the experiments, such as number of fitness evaluations and independent runs?**
The experiment involves **31** independent runs for MOEA and **5** for multiobjective machine learning tasks. The maximum fitness evaluations are **32,000** for bi-objective, **126,000** for three-objective, and **350,000** for four-objective optimization (except for DTLZ5/6, since they are degenerate). Detailed hyper-parameters are in **Table 5**.
**Q3. Statistical tests?**
Algorithms are run **31** times independently to obtain robust experimental results. We conducted the **Wilcoxon rank-sum test** at a significance level of 5%. The p-values (%) against the other methods are:
**Table 2. p-value of Wilcoxon rank**
| | SMS | NSGA3 | MOEA/D | AWA |
|--|--|--|--|--|
| IGD | 1.76 | 3.92 | 0.34 | 0.01 |
| Spacing | 0.02 | 0.01 | 0.06 | 0.01 |
| Sparsity | 0.2 | 0.01 | 0.01 | 0.07 |
| Uniform | 0.02 | 0.01 | 0.01 | 0.01 |
| SUniform |0.07| 0.07 | 0.07 | 0.07 |
| MaxGD |1.15| 3.38| 2.9 | 0.67 |
which indicates that UMOD's improvement is statistically significant.
**L1. Sensitivity analysis concerning different values of $K$.**
We investigate **K=8, 12, 20, and 30** (cf attached PDF), showing UMOD can handle **a wide range of K values**.
-----------
Reference
Uniform Design: Theory and applications. Fang et al. Technometrics. 2000.
Minimax and maximin space-filling designs: some properties and methods for construction. Pronzato et al. Journal de la Société Française de Statistique. 2017.
An Easy-to-use Real-world Multi-objective Optimization Problem Suite. Tanabe et al. Applied Soft Computing. 2020.
Multiobjective optimization problems with complicated Pareto sets. Li et al. TEVC. 2008.
Pareto Set Learning for Expensive Multi-Objective Optimization. Lin et al. NeurIPS 2022.
---
Rebuttal 2:
Title: Kindly requesting further comments
Comment: Dear reviewer,
We are more than willing to engage with you further to provide additional results to address any of your remaining concerns. Thanks for your effort on our work!
Best regards, Paper 5680 Authors
---
Rebuttal 3:
Title: Kindly asking for feedback
Comment: Dear Reviewer AxUm,
We sincerely thank you for your time and effort in reviewing our work, especially given your busy schedule. As the discussion phase wraps up, we kindly ask for your feedback on our responses. We hope to confirm that we've addressed your concerns and are open to any further questions or discussions.
If our answers meet your expectations, we would appreciate it if you could consider adjusting your score and confidence. We look forward to any further dialogue.
Thank you for your consideration.
Best regards, Paper 5680 Authors
---
Rebuttal 4:
Title: Further experiment results and discussions (Sensitivity of different $K$, FE, independent runs)
Comment: Dear Reviewer,
We have new results that directly address your concerns regarding the setting of different $K$'s, which we summarize below for your convenience.
> **Question: Sensitivity analysis for different values of $K$?**
Thank you for raising this important issue. We agree that analyzing different values of $K$ is crucial. In our main paper, we focused on a more challenging scenario using small $K$ values ($K$=8 for bi-objective and $K$=21 for three-objective problems) because:
- Approximating the Pareto front with a small number of solutions is more difficult, while larger values of $K$ make this easier.
- For multiobjective evolutionary algorithms (MOEAs), handling smaller $K$ values is more challenging due to the complexities in the MOEA/D framework. However, we intentionally do not delve into these technical details in the main paper, as UMOD is a general framework applicable to both evolutionary and gradient-based MOO.
- Fitting a neural network with fewer solutions is challenging, but we demonstrated that even with small $K$, the Pareto front model fits well, highlighting the benefits of using neural networks in this context.
- Theoretically, as per Theorem 4 in the main paper, maximizing the minimum pairwise distances ensures that as $K$ increases, the distribution converges asymptotically to a uniform distribution over the Pareto front.
Extending UMOD to a larger number of solutions is therefore easier. We present two new results:
- The one-page PDF shows promising visual results for different values of $K$.
- In Table 6 of our response to N5xS, we demonstrate that using larger $K$ values (K=50 for bi-objective and K=91 for three-objective problems) yields even better results, outperforming classical methods in most uniformity indicators. For example, the fill distance (covering radius) is only half that of MOEA/D, indicating a more precise coverage of the Pareto front.
We plan to include these results and comparisons with advanced methods like LSSA (Wang et al. 2023) in the revised paper.
**Question: Hyperparameters (FE, Seed Number) Used in MOEA?**
In addition to the previously mentioned hyperparameters, the study explores the impact of varying FE (Function Evaluations). Detailed results can be found in Table 5 under "Results Comparison with Different FE on Three-Objective Problems." The sections "FE for Many-Solution Problems" and "Analysis of Different FEs" provide further insights, along with our response to N5xS.
------
Reference:
- Wang et al., "Enhancing Diversity by Local Subset Selection in Evolutionary Multiobjective Optimization," IEEE TEC, 2023. | Rebuttal 1:
Rebuttal: We sincerely appreciate all helpful feedback and comments from the reviewers. In this part, we first address some general comments raised by the reviewers.
**Q1 (Asked by 5qjZ and N5xS). Relationship between the convergence of IGD and the uniformity of Fill Distance (FD). The advantages of the two indicators?**
We first illustrate the relation between Inverted Generational Distance (IGD) and Fill Distance (FD); and then explain the reasons why we prefer FD over IGD.
We (informally) recall the definitions:
$$\mathrm{IGD}(\mathbb{Y}) = \mathbb{E}_{\mathbf{y} \sim U(\mathcal{T})} \min_{\mathbf{y}' \in \mathbb{Y}} \rho(\mathbf{y}, \mathbf{y}'), \qquad \mathrm{FD}(\mathbb{Y}) = \max_{\mathbf{y} \in \mathcal{T}} \min_{\mathbf{y}' \in \mathbb{Y}} \rho(\mathbf{y}, \mathbf{y}'),$$
where $\mathcal{T}$ denotes the Pareto front, and $\rho$ is the Euclidean distance.
Given a set $\mathbb{Y}$, we denote $d(\mathbf{y}; \mathbb{Y}) = \min_{\mathbf{y}' \in \mathbb{Y}} \rho(\mathbf{y}, \mathbf{y}')$ as the distance function between $\mathbf{y}$ and $\mathbb{Y}$.
We can then rewrite $\mathrm{IGD}(\mathbb{Y}) = \\|d(\cdot,\mathbb{Y})\\|\_{1, U(\mathcal{T})}$, the $L_1$ norm of distance functional $d(\cdot,\mathbb{Y})$ under the measure $U(\mathcal{T})$, and $\mathrm{FD}(\mathbb{Y}) = \\|d(\cdot,\mathbb{Y})\\|\_{\infty, U(\mathcal{T})}$, the $L_\infty$ norm of the distance functional. By the classic property that $L_\infty$ norm dominates $L_1$ norm, we can immediately conclude that $\mathrm{IGD}(\mathbb{Y}) \leq \mathrm{FD}(\mathbb{Y})$ and thus FD is a stronger metric than IGD. More specifically, a design with tightly controlled FD must imply small IGD, while the reverse is not always true.
(Reviewer N5xS) With the observation above, it is thus improper to claim **minimizing FD is generally equivalent to minimizing IGD**.
FD is a stronger metric, and in many fields, designing an algorithm that provably attains a small $L_\infty$ norm is harder than doing so for the $L_1$ norm.
In addition, we argue that **all** solutions essentially contribute to FD. FD, as the $L_\infty$ norm of the distance functional, is naturally a uniform upper bound over **all** solutions, and we recall that convergence in the $L_\infty$ norm is equivalent to uniform convergence.
Indeed, any outlier point $\mathbf{y}$ far from the design $\mathbb{Y}$ will be captured by FD, whereas it may be averaged away by IGD.
We finally summarize the reasons why we choose FD as our motivated objective function:
- The attractive theoretical properties mentioned above.
- FD can be geometrically understood as covering radius.
- FD induces a tractable surrogate (Thm. 1). We note that neither IGD nor FD can be directly optimized without knowing the true PF.
Unlike our proposed surrogate, the practical computation of IGD relies on an empirical average over an approximation to the Pareto front, whose quality is usually unguaranteed.
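The $L_1$ vs. $L_\infty$ relationship above can be checked numerically. Below is a minimal sketch with a hypothetical linear bi-objective front and design; the discretization of $\mathcal{T}$ into sample points is an assumption made only for illustration:

```python
import math

def dist(p, q):
    # Euclidean distance rho
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def d_to_set(y, Y):
    # distance functional d(y; Y) = min_{y' in Y} rho(y, y')
    return min(dist(y, yp) for yp in Y)

def igd(front, Y):
    # L1 norm of d(.; Y): empirical average over a discretized front
    return sum(d_to_set(y, Y) for y in front) / len(front)

def fd(front, Y):
    # L_infty norm of d(.; Y): worst-case distance, i.e., the covering radius
    return max(d_to_set(y, Y) for y in front)

# hypothetical linear front {(t, 1 - t)} discretized into 201 points,
# approximated by a design Y of 5 equally spaced solutions
front = [(t / 200.0, 1.0 - t / 200.0) for t in range(201)]
Y = [(k / 4.0, 1.0 - k / 4.0) for k in range(5)]
assert igd(front, Y) <= fd(front, Y)  # the L_infty norm dominates the L1 norm
```

A single front point left uncovered by the design inflates FD immediately, while it is averaged away in IGD, matching the observation about outlier points above.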
**Q2 (Asked by JBGb, N5xS). Results on DTLZ 5-7?**
1. UMOD performs **exceptionally well** on DTLZ5/6, as shown in Figure 3 of the one-page PDF. This is because DTLZ5/6 have degenerate Pareto fronts, meaning most preference vectors correspond to duplicate Pareto objectives.
- This property makes calculating the specific preference vectors corresponding to uniform Pareto objectives more effective (by encouraging the production of distinct solutions).
- In contrast, using fixed preference vectors results in many duplicate Pareto solutions, which is inefficient. While MOEA/D-AWA reduces duplicated solutions through heuristic methods, UMOD significantly outperforms it.
- Additionally, UMOD achieves a much smaller covering radius compared to NSGA3 and SMS-MOEA. Although SMS-MOEA is competitive with UMOD on some uniformity indicators, it takes more time (around 2x) compared with UMOD.
Visual results for DTLZ5 and DTLZ6 are presented in the **one-page PDF**.
**Table 1:** Result on DTLZ5 (15 solutions), max func. evaluation 90,000.
| Method | Spacing$\downarrow$ | Sparsity$\downarrow$ | HV$\uparrow$ | IGD$\downarrow$ | FD$\downarrow$ | Runtime (min.)|
|--------|---------|----------|-------|-------|-------|-------|
| UMOD | **0.0197** | **0.0131** | 0.6999 | **0.0291** | **0.076** |10.34|
| AWA | 0.0495 | 0.0282 | 0.6783 | 0.0639 | 0.1513 |10.01|
| MOEAD | 0.0871 | 0.0645 | 0.6352 | 0.1430 | 0.3002 | 9.85|
| NSGA3 | 0.0599 | 0.0289 | 0.6623 | 0.0683 | 0.1802 | 0.61|
| SMS | 0.0454 | 0.0147 | **0.7035** | 0.0307 | 0.1099 | 16.99|
**Table 2:** Result on DTLZ6 (15 solutions), max func. evaluation 90,000.
| Method | Spacing | Sparsity | HV | IGD | FD | Runtime (min.)|
|--------|-----------|----------|--------|-------|---------|-----|
| UMOD | **0.0315**| **0.0136**| 0.7008 | 0.0505| **0.0941** | 11.56|
| AWA | 0.0904 | 0.0313 | 0.6697 | 0.0715| 0.1697 | 10.34|
| MOEAD | 0.0843 | 0.0599 | 0.6352 | 0.143 | 0.3002 | 9.34|
| NSGA3 | 0.0524 | 0.0415 | 0.6623 | 0.0931| 0.2972 | 0.73|
| SMS | 0.0426 | 0.0143 | **0.7036** | **0.0305** | 0.1099 | 21.23|
Further comparisons with more recent methods will be included in the revised paper.
2. For the current version of UMOD, it is not suitable to solve DTLZ7 since DTLZ7 has a disconnected Pareto front. Theorem 3 and Theorem 5 assume that the Pareto front is compact and connected to build a uniform distribution of Pareto objectives, but DTLZ7 is disconnected. To handle disconnected Pareto fronts, one approach is to run UMOD on each segment separately. We will explore this in future work.
Pdf: /pdf/e99c39f1cd5375a197e435e3def5f282f436c315.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback | Reject | Summary: The work studies linear contextual dueling bandits with adversarial feedback. In each round $t$ the agent observes a context $x_t$ and chooses two actions $(a_t,b_t)$. The environment generates a binary preference label $\ell_t = \mathbb{I}(a_t > b_t)$. The underlying assumption is that there exists a linear reward function $r(x,a) = \theta_{\star}^{\top}\phi(x,a)$, where $\theta_*$ is a latent $d$-dimensional vector and $\phi$ is a known feature map such that $\|\|\theta_*\|\|_2 \le B$ and $\|\|\phi\|\|_2 \le 1$.
Based on this, the preference $\ell_t$ is a random variable such that
$$
\mathbb{P}\big(a > b \mid x\big) = \sigma\big(r(x,a)-r(x,b)\big)
$$
The link function $\sigma$ is antisymmetric, $\sigma(-z) = 1-\sigma(z)$ and such that $\sigma' \ge \kappa > 0$.
It is further assumed that a nonoblivious adversary may occasionally flip the preference label with knowledge of $(a_t,b_t)$, where $C_T$ denotes the number of flips in the first $T$ rounds.
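A minimal sketch of this feedback model (the sigmoid link and the scalar rewards here are illustrative assumptions, not the paper's exact setup):

```python
import math
import random

def preference_label(r_a, r_b, rng):
    # P(a > b | x) = sigma(r(x, a) - r(x, b)) with a sigmoid link
    p = 1.0 / (1.0 + math.exp(-(r_a - r_b)))
    return 1 if rng.random() < p else 0

def adversarial_flip(label, corrupt):
    # on a corrupted round, the adversary flips the binary label
    return 1 - label if corrupt else label

rng = random.Random(0)
# with reward gap 4, sigma(4) is about 0.98, so clean labels mostly prefer a
clean = [preference_label(2.0, -2.0, rng) for _ in range(2000)]
frac_prefer_a = sum(clean) / len(clean)
```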
The regret is measured according to the following formula
$$
\max_a \sum_t 2r(x_t,a) - \sum_t \Big( r(x_t,a_t) + r(x_t,b_t) \Big)
$$
The main result is a regret bound of the order $d\sqrt{T} + dC$, ignoring logarithmic factors. This bound is tight because $\Omega(d\sqrt{T})$ is the known lower bound without adversarial corruption and $\Omega(dC)$ is shown to be the regret due to the adversarial corruption.
Experiments complete the contribution.
Strengths: The topic of adversarial feedback in dueling bandits is interesting and was previously studied only in a $K$-armed setting.
The regret bound is tight up to log factors.
The related work section is complete and accurate.
The experimental section is significant.
Weaknesses: The main results (upper and lower bounds) appear to be mostly based on combinations of known techniques from previous papers. The originality of the technical contributions is unclear.
The analysis of the $C$ unknown case (Section 5.2) is trivial.
Assumption 3.2 could create bad dependencies in the bounds on $B$ (and $\|\|\theta_{*}\|\|_2$).
The conditions $\sigma' \le 1$ and $\phi_t^{\top}\theta_{*} \le 1$ in the paragraph before (4.3) are not explicitly stated in the assumptions. Moreover, the second condition seems to imply that $B \le 1$.
There is no discussion on the hardness of computing $(a_t,b_t)$.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Can the authors point out what where the main technical challenges in the proof of Theorem 5.3?
* Please compute explicit values for $\kappa$ in terms of $B$ and $\|\|\theta_{*}\|\|_2$ for concrete choices of $\sigma$.
* Please elaborate on the hardness of computing $(a_t,b_t)$.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There is an explicit limitation paragraph in the conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and suggestions! We answer your questions as follows.
---
**Q1**: Technical novelties
**A1**: We want to emphasize that we study the dueling bandit problem, which is different from the standard linear bandit problem and incurs several challenges when using uncertainty-based weight in dueling bandits. First, the feedback is binary, given by a preference model $\mathbb{P}(a \succ b| x) = \sigma(r^*(x,a)-r^*(x,b)).$ This difference in settings leads to a different analysis.
Unlike the weighted regression method, our model employs weighted maximum likelihood estimation (MLE) to estimate the underlying parameter $\theta$. The nonlinearity of the function prevents us from obtaining a closed-form solution for $ \theta$, adding difficulty in determining the confidence radius. Moreover, our analysis of weights for the nonlinear model in Section 4 is novel. We explain how our weight selection can cancel out the variance, resulting in a more robust theoretical analysis under adversarial feedback.
In our proof, we bypass the difficulty of nonlinearity by utilizing an auxiliary vector function $G_{t}({\theta}) = \lambda\kappa{\theta} + \sum_{i = 1}^{t-1}w_i\Big[\sigma\big(({\phi}(x_i,a_i)-{\phi}(x_i,b_i))^\top {\theta}\big) -\sigma\big(({\phi}(x_i,a_i)-{\phi}(x_i,b_i))^\top {\theta}^*\big)\Big]\big({\phi}(x_i,a_i)-{\phi}(x_i,b_i)\big)$. The elliptical potential lemma provides an upper bound on $\\|G_{t}(\theta_t)\\|\_{\Sigma_t^{-1}}$. We bridge the gap between this and the confidence radius $\\|\theta_t-\theta^*\\|\_{\Sigma_t}$ via the mean value theorem (Lines 525 to 527).
To make it clearer, we have provided a roadmap of proof in Appendix A.
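As a one-dimensional illustration of the auxiliary function (scalar feature differences and the synthetic weights below are assumptions made only for the sketch): $G_t$ vanishes at $\theta^*$ when $\lambda = 0$ and is monotone in $\theta$, which is the kind of structure the mean value theorem argument exploits.

```python
import math

def sigma(z):
    # sigmoid link function
    return 1.0 / (1.0 + math.exp(-z))

def G(theta, data, lam, kappa, theta_star):
    # 1-d version of G_t: data holds pairs (w_i, phi_i), where phi_i stands for
    # the scalar feature difference phi(x_i, a_i) - phi(x_i, b_i)
    s = lam * kappa * theta
    for w, phi in data:
        s += w * (sigma(phi * theta) - sigma(phi * theta_star)) * phi
    return s

# synthetic weighted observations (hypothetical numbers)
data = [(1.0, 0.5), (0.7, -0.3), (1.0, 0.8)]
# G vanishes at theta = theta* when lam = 0 ...
assert abs(G(0.4, data, 0.0, 0.1, 0.4)) < 1e-12
# ... and is increasing in theta, since each summand has
# derivative w * sigma'(phi * theta) * phi^2 >= 0
assert G(0.6, data, 1.0, 0.1, 0.4) > G(0.2, data, 1.0, 0.1, 0.4)
```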
---
**Q2**: Analysis of unknown $C$ is trivial
**A2**: While the analysis is simple, this does not diminish the significance of the result: our algorithm achieves optimal regret even when faced with an unknown $C$ (see Remark 5.8). It is also included for the completeness of our work.
---
**Q3**: Conditions are not explicitly stated in the assumptions
**A3**: In Section 4, our argument aims to explain the motivation for introducing uncertainty-based weights. We want to clarify that we provide rigorous proof for the algorithm's performance in Appendix B.1, which relies solely on our specific choice of weights and does not depend on the conditions or analysis from Lines 207 to 210.
In the motivating analysis, we use $\sigma' \le 1$ since this inequality holds for many useful link functions (e.g., the sigmoid function). In addition, $\phi_t^\top \theta \le 1$ is a typo; we only need $\sigma'( \phi_t^\top \theta^*) \le 1$, which always holds for the sigmoid link function. In our revision, we will correct the typo and make the writing clearer.
---
**Q4**: Assumption 3.2 could create bad dependencies in the bounds on $B$
**A4**: We can calculate $\kappa$ in Assumption 3.2 using $B$. Take $\sigma(x) = 1/(1+e^{-x})$ for example, so $\sigma'(x) = e^{-x}/(1+e^{-x})^2$. Due to Assumption 3.1, $|(\phi(x,a)-\phi(x,b))^\top\theta| \le 2B$; since $\sigma'$ is minimized at the endpoints of this interval, $\sigma'((\phi(x,a)-\phi(x,b))^\top\theta) \ge 1/(e^{-2B} + 2 + e^{2B})$ for all $a,b \in \mathcal A, \theta$. As a result, we can take $\kappa = 1/(e^{-2B} + 2 + e^{2B})$. Our regret's dependency on $B$ will be $O(e^{2B} + e^{-2B})$. Usually, $B$ is considered a constant, and thus it will not adversely affect our regret. A similar dependency on $B$ is widely seen in the literature, for example [1][2][3].
---
**Q5**: Hardness of computing the arms
**A5**: In this work, we assume there is a computation oracle to solve the optimization problems over the action set $\mathcal{A}$ (e.g., Line 6 of Algorithm 1). A similar oracle is implicitly assumed in almost all existing works for solving standard linear bandit problems with infinite arms (e.g., OFUL and CW-OFUL algorithms). Without such an oracle, choosing a pair of actions from the infinite decision set would be computationally intractable.
In the special case where the decision set is finite, we can iterate across all actions, resulting in $O(k^2d^2)$ complexity for each iteration, where $k$ is the number of actions, and $d$ is the feature dimension.
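For instance, the finite-arm case can be sketched as a brute-force search over all ordered pairs. The scoring function below is a hypothetical stand-in for the optimistic value the selection rule would assign to a pair, not the actual RCDB criterion (which would also involve the $d \times d$ covariance, hence the $O(k^2 d^2)$ cost):

```python
import itertools

def select_pair(actions, score):
    # enumerate all k^2 ordered pairs and return the highest-scoring one
    return max(itertools.product(actions, repeat=2), key=lambda ab: score(*ab))

# toy 1-d "features" with an illustrative score: total estimated reward plus
# a bonus for comparing dissimilar arms
actions = [0.1, 0.5, 0.9]
best = select_pair(actions, lambda a, b: a + b + 2.0 * abs(a - b))
assert best == (0.1, 0.9)
```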
In our revision, we will add more discussion on the computational complexity.
---
[1] Zhu et al. Principled Reinforcement Learning with Human Feedback from Pairwise or K-wise Comparisons ICML
[2] Xiong et al. Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint ICML
[3] Li et al. Feel-Good Thompson Sampling for Contextual Dueling Bandits ICML
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. I found them useful. I have still some reservation concerning novelty though, hence I will raise my score by one point.
---
Reply to Comment 1.1.1:
Comment: We're glad that we were able to address your questions. Thank you for raising the score. To the best of our knowledge, our work is the first to achieve nearly minimax optimal regret for dueling bandits in the presence of **adversarial preference feedback**, regardless of whether the amount of adversarial feedback is known. We will highlight this in the final version. | Summary: This paper investigated the contextual dueling bandits with adversarial feedback, where the adversary can corrupt the binary feedback of the agent to a certain level. A new algorithm named RCDB has been proposed. The key idea lies in the utilization of uncertainty-weight MLE. Regret analysis of RCDB was provided along with some experimental evaluations.
Strengths: - This paper studies a known problem but with new angle, i.e., the adversary can corrupt the binary feedback of the agent to a certain level. The problem is well motivated.
- A novel algorithm named RCDB was designed and incorporated uncertainty-dependent weighting into the MLE.
- The theoretical performance of RCDB in terms of regret is provided.
- Experimental results to validate the performance of RCDB is presented and compared to existing methods.
Weaknesses: - Assumption 3.1 assumes a linear reward. The reviewer agrees that this is a "widely-used" assumption in the recent RLHF literature, but was curious if your framework and regret analysis can be extended without such an assumption? If not, what are the new challenges? Can you comment on this?
- The construction of the parameter estimator of $\theta$ requires the Taylor expansion. How the $\approx$s in Section 4 impact the regret analysis?
- The experiments were run for multiple times, however, the variance is not shown in Figure 1.
- The paper is very dense, and the authors have changed the template a bit, e.g., the space has been largely squeezed throughout the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Assumption 3.1 assumes a linear reward. The reviewer agrees that this is a "widely-used" assumption in the recent RLHF literature, but was curious if your framework and regret analysis can be extended without such an assumption? If not, what are the new challenges? Can you comment on this? The reviewer noticed that the authors discussed this at the end of the paper and mentioned Li et al. 2024. Will such an extension be straightforward?
- The construction of the parameter estimator of $\theta$ requires the Taylor expansion. How the $\approx$s in Section 4 impact the regret analysis?
- The experiments were run for multiple times, however, the variance is not shown in Figure 1.
- The paper is very dense, and the authors have changed the template a bit, e.g., the space has been largely squeezed throughout the paper.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback! We answer your questions as follows.
---
**Q1**: Challenges to extend the linear reward to more general settings.
**A1**: It might be possible to consider the corruption problem for a nonlinear reward function class with finite Eluder dimension, as in [1]. However, it is worth noting that the observed comparison feedback depends on the reward gap $r(x,a)-r(x,b)$, whereas the regret/sub-optimality of the selected actions is based on the summation of the two rewards $r(x,a)+r(x,b)$. For a linear reward function, both the reward gap $r(x,a)-r(x,b)$ and the reward summation $r(x,a)+r(x,b)$ remain within a linear function class. In a general setting, however, the gap and the summation may belong to different function classes, which makes the analysis difficult. This is beyond the scope of our work and we leave it as future work.
---
**Q2**: Will Taylor's expansion impact the regret analysis?
**A2**:
In Section 4, we aim to provide a clear explanation of the chosen weights. Therefore, we apply Taylor's expansion to clarify its relation to the variance and use approximations to illustrate the motivation, making it easier to understand. We want to clarify that we provide rigorous proof for the algorithm's performance in Appendix B.1, which relies solely on our specific choice of weights and does not need to use Taylor’s expansion.
---
**Q3**: variance for the experiments
**A3**: Thank you for your advice. In the uploaded pdf, we have included the variance information for the experiments in the plot.
---
**Q4**: Paper is too dense
**A4**: Thanks for your suggestion. We will restructure the formulas to create more space and improve readability in our revision, given that there is one extra page for camera-ready.
---
[1] Ye et al.(2023) Corruption-robust algorithms with uncertainty weighting for nonlinear contextual bandits and Markov decision processes.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thanks for the clarification. I will keep the current score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your support! | Summary: The authors proposed an algorithm, coined robust contextual dueling bandits (RCDB), for adversarial feedback, using uncertainty-weighted maximum likelihood estimation. The algorithm guarantees a regret of $\widetilde{O}(d\sqrt{T}+CT)$.
Strengths: 1. Their algorithm is not limited to a finite number of arms.
2. Their algorithm considers adversarial attacks based on selected actions (although the maximum number of adversarial attacks is restricted).
Weaknesses: 1. Hard to follow the paper
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. $\Sigma_t$ must be the Gram matrix in line 209; $\Sigma_t$ appeared without being introduced.
2. Please specify exactly where in the supplementary materials proofs of these results (Lemma 5.1 ~ Theorem 5.7) are located.
3. It seems necessary to specify the initial weight $w_t$ in Algorithm 1. $w_t$ is specified in Line 8 of Algorithm 1, but it is used from Line 3.
4. Further explanation is needed for lines 204-209. For instance, clarify how the property of $\mathcal{F}_t$-measurability is used in the conditional expectation in line 207. Additionally, explain why the assumption $|\theta_t-\theta^*|<<1$ is made to consider Taylor's expansion.
5. It would be better to clearly state that the proposed algorithm is effective under both known and unknown numbers of adversarial attacks. Prior to Section 5, it is unclear whether this paper considers both aspects or only one of them.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The authors precisely pointed out the limitations of their results, such as the reward function being linear with a known feature map.
2. Additionally, choosing $a_t, b_t$ (by computing argmax) might be infeasible when interaction with the environment needs to be fast.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback! We answer your questions as follows.
----
**Q1**: Hard to follow the paper.
**A1**:
Thank you for pointing out these issues. We will address your concerns one by one:
(1) $\Sigma_t$ appeared without introducing.
Our definition of $\Sigma_t$ is provided in line 3 of Algorithm 1. In our revision, we will add a reference to this definition when we use it to avoid confusion.
(2) specify exactly where in the supplementary materials proofs of these results (Lemma 5.1 ~ Theorem 5.7)
Due to page limits, we do not provide links in the main paper. However, the subtitles in the appendix clearly indicate where the results are proved. To be more specific, the proofs for the main theorems are in Appendix B, while other technical lemmas are placed in Appendix C. We will be sure to specify it in the revision.
(3) The initial weight $w_t$ is not specified before use
It is important to note that for each round $t$, the covariance matrix $\Sigma_t$ in Algorithm 1 (Line 3) only depends on the previous weights $w_1,...,w_{t-1}$. For the initial round where $t=1$, the covariance matrix $\Sigma_t$ is initialized as $\Sigma_1=\lambda \boldsymbol{I}$ and is not related to $w_0$. For subsequent rounds, the weights $w_1,...,w_{t-1}$ have already been calculated in the previous rounds.
(4) Further explanation is needed for lines 204-209. For instance, clarify the property of $\mathcal F_t$-measurability, and explain Taylor's expansion.
In Section 4, we introduce Taylor's expansion to explain the motivation for introducing uncertainty-based weights. We want to clarify that we provide rigorous proof for the algorithm's performance in Appendix B.1, which relies solely on our specific choice of weights and does not need to use Taylor’s expansion.
Intuitively, even without the weighting, an analysis similar to Lemma 5.1 can show that the estimated parameter $\theta_t$ will approach $\theta^*$ with a larger confidence radius, e.g., $\|\theta_t - \theta^*\|_{\Sigma_t} \leq \tilde{O}(\sqrt{d} C)$. In this situation, we almost have $|\phi_t^\top \theta_t - \phi_t^\top \theta^*| \leq \tilde{O}(\sqrt{d} C) \cdot \|\phi_t\|_{\Sigma_t^{-1}} \ll 1$ for a large round $t$, which encourages us to use Taylor's expansion.
For the property of $\mathcal{F}_t$-measurability, we aim to use the following property of conditional expectation with respect to a sub-$\sigma$-algebra.
Lemma (pulling out known factors): If $X$ is $\mathcal{H}$-measurable, then $E[XY|\mathcal{H}] = X E[Y|\mathcal{H}]$. By taking $Y=1$, we have $E[X|\mathcal{H}] = X$. Therefore, when calculating $\sigma(\phi_t^\top \theta_t) - \mathbb{E}[\sigma(\phi_t^\top \theta_t)| \mathcal{F}_t]$, the $\mathcal{F}_t$-measurable part will be canceled out. In our revision, we will add more explanation for our analysis.
(5) Prior to Section 5, it is unclear whether this paper considers both aspects or only one of them.
Thank you for your advice. Our paper considers both known and unknown numbers of adversarial feedback. We will add more discussion to clarify this point in our revision.
In our revision, we will modify our paper to address these issues and make the content easier to understand.
---
**Q2**: Calculating the argmax might be infeasible.
**A2**:
In this work, we assume there is a computation oracle to solve the optimization problems over the action set $\mathcal{A}$ (e.g., Line 6 of Algorithm 1). A similar oracle is implicitly assumed in almost all existing works for solving standard linear bandit problems with infinite arms (e.g., OFUL and CW-OFUL algorithms). Without such an oracle, choosing a pair of actions from the infinite decision set would be computationally intractable.
In the special case where the decision set is finite, we can iterate across all actions, resulting in $O(k^2d^2)$ complexity for each iteration, where $k$ is the number of actions, and $d$ is the feature dimension.
In our revision, we will add more discussion about the calculation complexity.
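For the finite-arm case described above, the $O(k^2 d^2)$ pairwise search can be sketched as follows. This is our illustrative reconstruction with a generic optimistic bonus; the function name and the exact bonus form are assumptions, not the paper's code.

```python
import numpy as np

def select_pair(phi, theta_hat, Sigma_inv, beta):
    """Optimistic pairwise arm selection over a finite decision set.

    phi: (k, d) arm features; theta_hat: (d,) reward estimate;
    Sigma_inv: (d, d) inverse covariance; beta: confidence radius.
    The bonus form is a generic stand-in, not the paper's exact rule.
    """
    k = phi.shape[0]
    best_pair, best_val = (0, 0), -np.inf
    for a in range(k):           # O(k^2) pairs, each costing O(d^2)
        for b in range(k):
            diff = phi[a] - phi[b]
            # optimistic estimate of r(a) + r(b) plus uncertainty bonus
            val = phi[a] @ theta_hat + phi[b] @ theta_hat \
                + beta * np.sqrt(diff @ Sigma_inv @ diff)
            if val > best_val:
                best_val, best_pair = val, (a, b)
    return best_pair
```

Each candidate pair costs one $d \times d$ quadratic form, giving the stated per-round complexity for $k$ arms.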
---
Rebuttal Comment 1.1:
Comment: Thank you for your kind explanation.
While there are lines where mathematical notation appears abruptly (e.g., the Gram matrix on line 209), these are minor issues. If the author provides additional guidance to help readers follow more easily, I believe the paper is overall good. I would therefore raise the rating from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your rating! We will be sure to improve the presentation according to your suggestion. | Summary: This paper studies the Contextual Dueling Bandits from Adversarial Feedback problem, in a linear reward setting. The authors propose an algorithm named robust contextual dueling bandits (RCDB), which is designed based on uncertainty-weighted regression and MLE. The authors prove that the proposed algorithm achieves a nearly optimal regret upper bound that matches the lower bound both in scenarios with and without (C=0) adversarial feedback. Experimental evaluations are provided to validate the theoretical results.
Strengths: 1. The problem is well-motivated and important.
2. The paper is well-written.
3. The authors prove a nearly optimal regret upper bound for the proposed algorithm that matches the lower bound with and without (C=0) adversarial feedback.
4. The authors also conduct some experiments to validate the theoretical results.
Weaknesses: 1. The title may be a little bit misleading, I think the setting of this paper is the adversarial corruption setting, not the setting with completely adversarial feedback. And the setting is the linear reward model, which is not specified in the title.
2. I have not checked the details, but I feel the uncertainty-weighted technique (which is the key to dealing with the corruption in the linear bandits) is mostly based on the previous works, could the authors highlight the technical difficulties in the dueling bandit setting?
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback. We will address your questions one by one.
---
**Q1**: The title may be a little bit misleading, I think the setting of this paper is the adversarial corruption setting, not the setting with completely adversarial feedback. And the setting is the linear reward model, which is not specified in the title.
**A1**: In previous studies on standard linear bandit problems, including both adversarial corruption and adversarial bandits, the adversary directly targets the reward. In contrast, our setting involves binary preference feedback rather than direct reward feedback, with the adversary attacking by flipping the preference labels. We use “adversarial feedback” to differentiate our work from prior studies on corrupted or adversarial reward settings, for example [1][2]. Indeed, our setting focuses on the linear reward model, and we will specify this in the title in the revision. Thank you for your suggestion.
---
**Q2**: The uncertainty-weighted technique is based on previous works. What are the technical difficulties?
**A2**: We want to emphasize that we study the dueling bandit problem, which differs from the standard linear bandit problem in [3] and incurs several challenges when using uncertainty-based weights in the dueling bandit context. Specifically, in the dueling bandit setting, the feedback is binary and given by a preference model $\mathbb{P}(a \succ b| x) = \sigma(r^*(x,a)-r^*(x,b))$. This difference in feedback necessitates a different analytical approach.
Unlike the weighted regression method, our model employs weighted maximum likelihood estimation (MLE) to estimate the underlying parameter $\theta$. The nonlinearity of the function stops us from having a closed-form solution of $\theta$, making it difficult to determine the confidence radius. Additionally, in Section 4, we discuss the weight selection, explaining how our uncertainty-based weights can cancel out the variance of the estimated preference probability.
To overcome the nonlinearity challenge in our proof, we utilize an auxiliary vector function $G_{t}({\theta}) = \lambda\kappa{\theta} + \sum_{i = 1}^{t-1}w_i\Big[\sigma\big(({\phi}(x_i,a_i)-{\phi}(x_i,b_i))^\top {\theta}\big) -\sigma\big(({\phi}(x_i,a_i)-{\phi}(x_i,b_i))^\top {\theta}^*\big)\Big]\big({\phi}(x_i,a_i)-{\phi}(x_i,b_i)\big)$. Then, the elliptical potential lemma provides an upper bound of $\|G_{t}(\theta_t)\|_{\Sigma_t^{-1}}$. We bridge the gap between this and the confidence radius $\|\theta_t-\theta^*\|_{\Sigma_t}$ by the mean value theorem (Lines 525 to 527).
To make it clearer, we have provided a roadmap of the proof in Appendix A.
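As a rough illustration of the uncertainty-based weighting idea discussed above (a hedged sketch; the exact constants and thresholds in the paper may differ, and this is not the authors' code):

```python
import numpy as np

def uncertainty_weight(phi_diff, Sigma_inv, alpha):
    """Down-weight observations whose feature difference carries a large
    uncertainty bonus -- high-uncertainty rounds are where adversarial
    feedback can do the most damage. The exact form is illustrative;
    division by a zero bonus is not handled here."""
    bonus = np.sqrt(phi_diff @ Sigma_inv @ phi_diff)
    return min(1.0, alpha / bonus)
```

Samples with a small bonus keep full weight 1, while highly uncertain samples are shrunk, which caps the influence any single corrupted label can have on the weighted MLE.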
---
[1] Gajane et al., A Relative Exponential Weighing Algorithm for Adversarial Utility-based Dueling Bandits. ICML
[2] Saha et al., Versatile dueling bandits: Best-of-both world analyses for learning from relative preferences. ICML
[3] He et al., Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions. NEURIPS
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I keep my score. Good luck!
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued support! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Personalized Federated Learning via Feature Distribution Adaptation | Accept (poster) | Summary: Federated Learning (FL) combines data from multiple clients to train a global model but struggles with heterogeneous data. Personalized Federated Learning (PFL) creates individual models for each client, addressing this issue. Traditional methods face challenges with bias-variance trade-offs and rely on limited local data or costly techniques. This paper proposes pFedFDA, an algorithm that frames global representation learning as a generative modeling task. It adapts global generative classifiers to local feature distributions, improving performance in data-scarce settings.
Strengths: 1. Implementing personalized federated learning from a generative view looks interesting and promising.
2. This paper is well-structured overall, but the notation could be improved.
3. The mathematical proof looks good and sufficient.
Weaknesses: 1. Your method looks similar to traditional personalized federated learning. Just as in Algorithm 1, the shared backbone is simply the weighted summation of the corresponding parts across all involved clients.
2. You mentioned that your shared backbone is trained in a generative way, but I cannot get the core of it. Actually, some other works [1] use generative models (e.g., autoregression) as the backbone to implement personalized federated learning; what is the difference between your method and such baselines?
3. Your global distribution parameters are a weighted summation over all clients, so in my opinion, the global distribution may be biased due to data-size skewness. There are many ways to overcome this, such as [2], etc. Do you think such methods could be adopted to fix this issue of data skewness?
[1] Kou, W.B., Lin, Q., Tang, M., Xu, S., Ye, R., Leng, Y., Wang, S., Chen, Z., Zhu, G. and Wu, Y.C., 2024. pFedLVM: A Large Vision Model (LVM)-Driven and Latent Feature-Based Personalized Federated Learning Framework in Autonomous Driving. arXiv preprint arXiv:2405.04146.
[2] Kou, W.B., Lin, Q., Tang, M., Wang, S., Zhu, G. and Wu, Y.C., 2024. FedRC: A Rapid-Converged Hierarchical Federated Learning Framework in Street Scene Semantic Understanding. arXiv preprint arXiv:2407.01103.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. "where Φ and H are the feasible sets of neural network and classifier parameters, respectively.", in this context, it looks better to replace "neural network" to "shared backbone", do you think so?
2. The notation sometimes looks confusing and could be improved.
3. In table 5, it should be pFedFDA not pFedDFA, right?
4. The used datasets look simple; could you use more complex datasets, such as Cityscapes, CamVid, KITTI, ImageNet, etc., to verify your contributions?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: the proposed method over-relies on the local distribution, so it is difficult to handle the following cases: 1. a new client is added into the system and its local distribution is far from the original global distribution and the original distributions of all clients; 2. the local distributions of all clients depend on time (i.e., are dynamic).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 (Distinction with Prior PFL Methods)** Our method falls under the personalization framework of shared representation learning with personalized classifiers (discussed in our related work, L85-107).
The distinction of our work is in our formulation of generative client classifiers. Notably, the formulation of the classification layer not only defines the final personalized model of each client, but also controls how the shared backbone parameters are trained. We further discuss the advantages of our generative classifiers in our response to W2 below.
**W2 (Clarification of Generative Approach)** In our work, the term generative refers to the construction of our client classifiers, which we obtain through Bayes' rule and an estimate of the joint distribution of latent representations and class labels, $p(z, y)$. This is in contrast to prior representation-learning-based PFL methods which learn linear classifiers in a discriminative manner, learning $p(y|z)$ directly.
The selection of client classifiers is important not only in obtaining an accurate personalized model but also in shaping how the shared backbone is trained. Using a generative classifier on each client to train the shared backbone, we can promote alignment in client features $p(z|y)$ while accounting for differences in class distributions $p(y)$.
To provide one final perspective on training the backbone with generative classifiers, by minimizing the cross-entropy error in model predictions, we are training the shared backbone to extract features according to the distribution defined by our generative model.
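The generative construction described above (class-conditional Gaussians combined through Bayes' rule, with a tied covariance so the decision boundary stays linear) can be sketched as follows. This is our illustrative reconstruction, not the authors' implementation; all names are ours.

```python
import numpy as np

def fit_generative_classifier(Z, y, n_classes):
    """Estimate per-class means, a shared (tied) covariance, and class
    priors from latent features Z -- an estimate of p(z, y)."""
    d = Z.shape[1]
    means = np.stack([Z[y == c].mean(axis=0) for c in range(n_classes)])
    # tied covariance from class-centered residuals, lightly regularized
    cov = np.cov((Z - means[y]).T) + 1e-4 * np.eye(d)
    priors = np.bincount(y, minlength=n_classes) / len(y)
    return means, cov, priors

def predict(Z, means, cov, priors):
    """Bayes' rule with Gaussian class-conditionals and tied covariance;
    the resulting classifier is linear in z."""
    W = np.linalg.solve(cov, means.T)                      # (d, c)
    b = -0.5 * np.sum(means * W.T, axis=1) + np.log(priors)
    return np.argmax(Z @ W + b, axis=1)
```

Because the covariance is shared across classes, the quadratic term in the Gaussian log-likelihood cancels and the scores reduce to the linear form `Z @ W + b`, matching the point that a tied covariance yields a linear decision boundary.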
In your first provided reference, the backbone is trained using an input-level generative task, either autoregressive next-pixel prediction or masked-pixel prediction. This technique of training large vision models is known as generative pre-training. We will edit our paper accordingly to make sure the usage of the generative terminology is clear.
**W3 (Data Skewness in Generative Modelling)** Indeed, the weighted aggregation of parameters in most FL algorithms can lead to a bias towards clients with more local data. However, this update rule is still widely used in many PFL settings (e.g., in FedRep/BABU/PAC/FedAvg to aggregate neural network params), and an unweighted aggregation may increase the influence of noisy estimates from data-scarce clients.
In our work, we address the bias of using the shared distribution estimates through a local-global interpolated estimate. This has some similarities to the second provided reference, which uses the relative Bhattacharyya distance between RGB distributions to determine aggregation weights. While a similar approach could be used to obtain our interpolation coefficient, we optimize this coefficient directly to maximize personalized model accuracy.
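Concretely, the local-global interpolation described above amounts to a one-dimensional search over the coefficient. The sketch below is illustrative; the paper's actual objective is personalized model accuracy, which the generic `score_fn` (our assumption) stands in for.

```python
def interpolate_estimates(local_est, global_est, betas, score_fn):
    """Search over interpolation coefficients beta in [0, 1] and return
    the best one together with the blended local-global estimate."""
    best = max(betas, key=lambda b: score_fn(b * local_est + (1 - b) * global_est))
    return best, best * local_est + (1 - best) * global_est
```

A data-scarce client whose local estimate is noisy would score better at small beta (leaning on the global estimate), while a client under strong covariate shift would favor beta near 1.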
**Q1/Q2/Q3 (Method Presentation)** We appreciate the feedback on the presentation of our method, and we will incorporate these comments to improve the clarity of our work.
**Q4 (Benchmark Datasets)** The selected vision datasets (CIFAR10/100, EMNIST) have been adopted by many recent works [1, 2, 3] to measure the effect of data heterogeneity on PFL tasks. We have additionally leveraged a set of natural data corruptions [4] for our CIFAR benchmarks, to simulate the complexity introduced by real-world covariate shift. We include TinyImageNet as an additional reference point in a large-data setting, but we focus primarily on the more challenging scenarios involving data scarcity and covariate shift. We leave the extension of our method to semantic segmentation tasks to future work.
[1] Exploiting Shared Representations for Personalized Federated Learning (Collins et al., ICML 2021)
[2] FedBABU: Towards Enhanced Representation for Federated Image Classification (Oh et al., ICLR 2022)
[3] Personalized Federated Learning with Feature Alignment and Classifier Collaboration (Xu et al., ICLR 2023)
[4] Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations (Hendrycks et al., ICLR 2019)
**L1/L2 (Over-Reliance on Local Distribution Estimate)** While our method utilizes the local distribution estimate in generating personalized models, our personalized classifiers are based on an interpolated distribution estimate, which enables them to leverage global knowledge to reduce the variance in their local classifier. Additionally, we note that recent representation-learning based methods [1][2] only use local client data to estimate client classifiers, and based on our results in Tab. 3, we observe that our method is more robust to extreme data scarcity, where the local distribution estimate is most limited.
In response to your concern on new-client generalization, we run the following experiment, which can be found in Tab. 1 of the rebuttal PDF:
- We train each method on CIFAR10 Dir(0.5) using half of the total clients throughout training. At test time, we evaluate on these clients, as well as the second half of clients not seen at training time. We additionally evaluate new-client generalization performance under covariate shift, by corrupting the images of all new clients with each of the 10 corruptions considered in our paper.
- We observe that our new-client generalization performance is superior than the baseline FedAvgFT for all settings with and without covariate shift.
Regarding the compatibility of pFedFDA with dynamic client data distributions, we think this is an interesting direction for future work, but in this paper we follow the experimental setup of our cited baselines in assuming that client data distributions are static.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal, and the authors have addressed my concerns properly.
Strengths: The paper presents an interesting and meaningful idea, complemented by a theoretical analysis of the bound on high probability estimation errors for the interpolated mean estimate, offering valuable insights and inspiration to the community.
The writing is clear and straightforward, with well-designed figures that effectively support and clarify the presented concepts.
Weaknesses: There are several concerns that need addressing:
1.In some cases, the performance improvements appear minimal. For instance, Table 2 shows that pFedFDA is outperformed significantly by FedBABU and pFedME.
2.In Table 3, the authors employ a Dirichlet distribution (Dir(0.5)) for experiments under extreme data scarcity. It would be beneficial to lower the Dirichlet value, perhaps to 0.1, to conduct a more extensive evaluation under greater data scarcity.
3.Table 5 indicates that the runtime of pFedFDA is longer than several other methods, including FedAvg, Ditto, and FedRoD, which calls into question the efficiency of the proposed solution.
4. In Equations 3 and 5, should there be a superscript ‘c’ on Σ to clarify the notation further?
Technical Quality: 3
Clarity: 3
Questions for Authors: please check "Weaknesses". I like the idea of this work and I am looking forward to the responses from authors.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 (Performance Comparison of pFedFDA with Prior Works)** We believe this might be a misread of our results. While the performance improvements are not as significant in Tab. 2, pFedFDA is still competitive (achieving top-2 performance in 5 of the 7 scenarios). In particular, it beats FedBABU in all the 7 scenarios and beats pFedMe in 4 of the 7 scenarios. For the other 3 scenarios, pFedMe outperforms pFedFDA by .002, .004, .004, whereas pFedFDA outperforms pFedMe in the 4 scenarios by .002, .023, .03, .074.
In the two columns in which our method is not within the top-2, CIFAR-10 Dir(0.1) and TinyImageNet Dir(0.1), we observe that the gap between all methods is smaller. This can be explained by the relative ease of the personalization tasks under those two scenarios. Notably, these scenarios have limited covariate shift between clients and the local data volumes are large enough for clients to build strong models even without collaboration.
If we consider these results alongside Tab. 1 and Tab. 3, our experiments indicate that pFedFDA is promising as a general strategy for real-world PFL settings where we may encounter challenges of covariate shift and data scarcity.
**W2 (Setup for Data-Scarce Experiments)** Thank you for your suggestion. We want to emphasize that in this experiment, the Dirichlet distribution governs only class imbalance and not data scarcity. The extreme data scarcity setup in Tab. 3 is similar to what is considered in [1], where each client is assigned exactly one mini-batch of samples. Thus, changing the parameter of the Dirichlet distribution will not change the extent of data scarcity. We will clarify this setup in the camera-ready version of the paper if accepted.
[1] Personalized Federated Learning with Feature Alignment and Classifier Collaboration (Xu et al. ICLR 2023)
**W3 (On the Runtime of pFedFDA)** We do observe that there is a common computational cost associated with interpolation-based methods, as the local training takes additional steps to update each interpolated model directly (APFL), or optimize the interpolation weight between candidate models (FedPAC and our pFedFDA).
Compared to [1], our interpolation task is more efficient, as we combine local and global models, rather than the models of each client.
To understand the overhead of pFedFDA in greater depth, we conduct an analysis of the run-time associated with each component of local training in Tab. 2 of the attached rebuttal document. Notably, our optimization of the interpolation parameters is responsible for most of the additional training time. In Tab. 3 of the attached rebuttal document, we compare the accuracy of our method when updating the interpolation parameter every $T$ rounds instead of every local update. While updating the interpolation coefficients every round results in the best performance, selecting $T=2$ or $T=3$ results in comparable average accuracy, thus this strategy may be preferable in resource-constrained settings to improve the efficiency of our method.
**W4 (On Notation Clarity in Equations 3 and 5)** Thank you very much for your careful reading and detailed comments. This superscript is not required, as all classes share the same covariance matrix in our generative model (Line 178-179). Notably, this tied covariance assumption results in a linear decision boundary, allowing us to make a direct comparison to current works without altering the model architecture. We will make an effort to clarify this point in the revised paper.
---
Rebuttal 2:
Title: Replying to the rebuttal
Comment: Thank you for the rebuttal. After reviewing the responses to all the comments, I find that my initial concerns have largely been resolved, except for the performance improvement. Consequently, I will maintain my original rating.
---
Rebuttal Comment 2.1:
Title: Clarification on the rating
Comment: Thanks for your response. We are glad to hear that your initial concerns have been largely resolved. In the notification email, it says "Consequently, I will maintain my original rating (Weak Accept)." Yet, in the system, it says 'Borderline accept'. Would you mind confirming your choice of rating in the system? Thank you very much! | Summary: The paper introduces a personalized Federated Learning (FL) method that adapts global generative classifiers to local feature distributions. The authors show that their method can handle complex distribution shifts for computer vision tasks.
Strengths: - The paper proposes a personalized FL method that uses a generative classifier by considering bias-variance trade-off.
- The authors conduct extensive experiments.
- The paper is well-structured and easy to understand.
Weaknesses: - Sharing the statistics of the generative classifier could increase privacy risks.
- The algorithm’s additional computational cost (in Algo.1 L8-9) could increase with larger local data. Calculating the local covariance matrix for larger data and the covariance inversion matrix for each beta value could be computationally intensive.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Fig. 1, the colors representing client 1 and client 2 are not easily distinguishable.
- In Eq. (4), wouldn’t the cost of the covariance matrix inversion be high? Could the server calculate it once and send it to the client to reduce the client’s computational cost, even if it increases communication cost?
- In Eq. (9), should “\in min” be replaced with “= \argmax”?
- If the client’s generative classifier’s mean and variance are sent to the server, wouldn’t the privacy risk increase compared to sending only the weight of the conventional linear classifier?
- If the generative classifier is not used and only the weight parameter from the conventional linear classifier is interpolated locally, could the performance be expected to be mid-range between FedAvgFT and pFedFDA?
- In experiments like Table 2, where there is less covariate shift between clients within the federation, FedPAC performs better. Could the degree of covariate shift within the federation be determined by looking at the difference in the generative classifier’s statistics coming from the client, and then choose the appropriate FL method?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1/Q4 (Privacy Concerns of Sharing Gaussian Sufficient Statistics)** We appreciate the reviewer's comments and think it is important to discuss the privacy implications in our work.
The transmission of client feature statistics to the parameter server does not immediately raise high privacy risks as these statistics are not calculated directly from client raw data but from a mapping in a latent space embedded by a neural network. In addition, clients broadcast only the interpolated estimates of the mean and variance, and the local interpolation parameter $\beta$ is not shared with the server. Thus, a potential attacker does not have direct access to the true local statistics.
On the other hand, it is difficult to make any guarantees without using explicit privacy protection techniques, such as homomorphic encryption [1] or differential privacy (DP) [2]. Both of these techniques are compatible with our method. Notably, our results in the low-sample regime (Tab. 3) indicate that pFedFDA can tolerate noisy distribution estimates, which is promising for future integration with recent techniques [3,4,5] for efficient DP estimates of the mean and covariance.
[1] Privacy-Preserving Deep Learning via Additively Homomorphic Encryption (Phong et al., IEEE Transactions on Information Forensics and Security 2018)
[2] Personalized Federated Learning With Differential Privacy (Hu et al., IEEE Internet of Things Journal 2020)
[3] Mean Estimation with User-level Privacy under Data Heterogeneity (Cummings et al., NeurIPS 2022)
[4] Differentially Private Covariance Estimation (Amin et al., NeurIPS 2019)
[5] Differentially Private Covariance Revisited (Dong et al., NeurIPS 2022)
**W2/Q2 (Computational Cost Associated with Covariance Estimation)** The added cost of client covariance estimation is reasonable in comparison to the base computation of training a neural network. Additionally, we avoid the more expensive calculation of the inverse covariance by solving a least-squares problem instead (refer to L197). This has a reduced complexity of O($cd^2$) compared to O($d^3$) for a naive matrix inversion algorithm. In our local training, the local covariance matrix is estimated once, and the optimization of the interpolation parameter re-uses this estimate and only has to recompute the least-squares problem for each value of beta.
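As a toy NumPy sketch of this trick (our illustration with made-up sizes, not the paper's code), the class weights obtained from an explicit inverse and from a direct linear solve coincide for a well-conditioned covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 64, 10                        # illustrative feature dim / class count
feats = rng.normal(size=(1000, d))   # stand-in for client features
means = rng.normal(size=(c, d))      # stand-in for per-class feature means

cov = np.cov(feats, rowvar=False)    # shared covariance estimate, (d, d)

# Naive route: explicitly invert the covariance, then multiply.
w_inv = np.linalg.inv(cov) @ means.T

# Cheaper route: solve cov @ W = means^T without forming the inverse.
w_lstsq = np.linalg.lstsq(cov, means.T, rcond=None)[0]

assert np.allclose(w_inv, w_lstsq, atol=1e-6)
```

The same system could equally be solved via a single Cholesky factorization reused across classes; the point is only that the inverse never needs to be materialized.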
In Tab. 2 of the attached rebuttal document, we measure the percentage of local training time associated with the base network training (forward/backward passes), the estimation of the local mean and covariance, as well as the optimization of the interpolation coefficient. We note that estimating the mean and covariance is less than 3\% of the total runtime of each round.
From this observation, we run experiments in which the interpolation parameter is only estimated every $T$ rounds, as this is the primary overhead introduced in pFedFDA. While updating the interpolation coefficient every round results in the best performance, we observe a similar accuracy using $T=2$ or $T=3$, which would result in a substantial reduction of the pFedFDA overhead. Detailed results for this study can be found in Tab. 3 of the attached rebuttal document.
**Q1/Q3 (Figure 1 and Equation 9 Presentation)** We appreciate the feedback and will improve upon the clarity of our work by adjusting the client color palette and making the notation of Eq. (9) consistent with the rest of the paper.
**Q5 (Generative vs. Discriminative Classifier Interpolation)** Thanks for the interesting question. We agree with the conjecture that our personalized classifier should perform better than interpolated discriminative classifiers.
At a high level, a generative approach is more advantageous in low-sample settings; but both will approach a similar accuracy if the local data volume is sufficient. Importantly, this is assuming they are using the same feature extractor parameters.
However, we would like to point out that our generative modeling approach influences not only the formulation of the final personalized classifiers but also the process of global representation learning.
In pFedFDA, clients train the feature extractor to minimize the cross-entropy loss of a generative classifier using global feature statistics (refer to Section 4.2). Intuitively, this loss pulls client features towards the global feature distribution. This allows clients to benefit more from model interpolation, as there is less bias incurred through incorporating global knowledge (i.e., we can use a smaller $\beta$ in Theorem 1.).
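For intuition, a generic shared-covariance Gaussian discriminant scoring rule (a textbook sketch with hypothetical class means; the paper's classifier and its training objective are not reproduced here) looks like:

```python
import numpy as np

def gda_log_probs(z, means, cov):
    """Log class posteriors under a shared-covariance Gaussian model
    with equal priors (toy sketch, not the paper's implementation)."""
    diffs = z[:, None, :] - means[None, :, :]             # (n, c, d)
    # Solve cov @ s = diff instead of forming cov^{-1} explicitly.
    sol = np.linalg.solve(cov, diffs.transpose(0, 2, 1))  # (n, d, c)
    maha = np.einsum('ncd,ndc->nc', diffs, sol)           # squared distances
    logits = -0.5 * maha
    return logits - np.log(np.exp(logits).sum(1, keepdims=True))

means = np.array([[0., 0.], [5., 5.], [0., 5.]])  # hypothetical class means
cov = np.eye(2)
log_p = gda_log_probs(means + 0.1, means, cov)
assert (log_p.argmax(1) == np.arange(3)).all()    # nearest mean wins
```

Under this scoring rule, features far from their class's global mean incur a large Mahalanobis penalty, which is the sense in which the loss "pulls client features towards the global feature distribution".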
If an alternative method using interpolated discriminative classifiers does not employ additional regularization to guide the representation learning - the performance may indeed drop below FedAvgFT on certain benchmarks. We can see a concrete example of this in the ablation study of FedPAC [6].
[6] Personalized Federated Learning with Feature Alignment and Classifier Collaboration (Xu et al. ICLR 2023)
**Q6 (Selecting a Preferred Method for Varying Non-IID Settings)** While dynamically selecting the personalization algorithm is interesting, it may be challenging in practice as the global feature extractor and estimated distributions evolve throughout training.
Without prior knowledge of client distributions, we think it makes sense to adopt pFedFDA as a generic one-size-fits-all approach, as it is robust to covariate shift and data scarcity, and is more efficient than FedPAC.
If additional resources are available, it may be reasonable to adopt a hybrid approach and interpolate the pFedFDA classifiers across all clients in the style of FedPAC after training has concluded.
---
Rebuttal Comment 1.1:
Comment: Thank you for your feedback. I have reviewed the authors’ rebuttals to all the reviews, and most of my concerns have been addressed. Therefore, I would like to change my rating to 6 (weak accept). | Summary: This paper uses a class-conditional Gaussian model to formulate the latent representation of the global generative part; for the other part, a personalized federated learning algorithm, pFedFDA, is designed via Federated Distribution Adaptation. The paper then proves a bound on the bias-variance trade-off of pFedFDA under the assumption of independent, Gaussian-distributed local dataset features. Finally, the performance of pFedFDA against dataset scarcity and heterogeneity is presented.
Strengths: 1. The introduction and related work are well-organized. The explanation on the replacement of generative global model of the embedding module is convincing.
2. This paper gives a solid proof on the analysis of bias-variance trade-off.
Weaknesses: 1. The experiment is concentrated on the CIFAR-10 and CIFAR-100. However, it is also the dataset where pFedFDA has best performance on (Tab. 2). Also, the robustness of pFedFDA against strong heterogeneity is not good. The weakness of pFedFDA against heterogeneity is still a limitation in this paper.
2. The assumption of Theorem 1 is very strong; however, this paper does not evaluate the long-tailedness or normality of the datasets used in the experiments. Also, the basic result of Theorem 1 concerns the upper bound of the local bias, yet there is no metric in the experiments to compare the bias-variance balance among the algorithms, especially in the scarcity experiments (Tab. 1, 3).
3. This paper does not explain the connection between bias-variance balance and robustness against data scarcity.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Can the Class-Conditional Gaussian Kernel represent the optimal Gaussian distribution?
2. Further explain the experiment Tab. 1. How can this relate to the advantage of pFedFDA on bias-variance balancing? Also, other models (i.e. APFL, Ditto) have a stable performance enduring the dataset scarcity. How can this be explained?
3. In Tab. 2, the line of FedRep, please double check the value of the last cell .145(0.4). Is that correct?
4. Do you perform normality test on the training set of agents? If not, can you give a further explanation of the feature distribution of datasets to match the assumption of Theorem 1?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: 1. The assumption of Theorem 1 is strong. It would be better to consider the skewed distribution, and even long-tailed distributions. Based on this analysis, the result in Tab. 3 could be expanded to more general few-shot or one-shot learning cases, with a modified version of pFedFDA.
2. The design of pFedFDA is still not robust to heterogeneity and is not capable of better personalization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 (Robustness to Data Heterogeneity)** We note that while Tab. 1 and Tab. 3 are based on CIFAR datasets, these evaluations introduce the additional challenges of client covariate shift (via natural image corruptions) and data scarcity. Our strong performance in these settings indicates that pFedFDA is robust to realistic sources of covariate shift, in addition to quantity skew and prior probability shift introduced via Dirichlet-based data partitioning.
For more discussion of Tab. 2, please refer to our response W1 to reviewer WK31.
**W2 (Gaussian Assumption)** The application of the multivariate central limit theorem to feature representations is a common step in the analysis of neural networks, e.g., [1,2] study their relationship with Gaussian Processes. Moreover, it has been observed that the distribution of features from trained neural networks is well approximated by a class-conditional Gaussian, with a Gaussian discriminant analysis classifier having comparable accuracy to the softmax classifier used for training [3]. This has led to the widespread usage of class-conditional Gaussian feature space approximations in the literature on out-of-distribution detection [3, 4, 5].
[1] Deep Neural Networks as Gaussian Processes (Lee et al., ICLR 2018)
[2] Dropout as a Bayesian Approximation (Gal et al., ICML 2016)
[3] A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks (Lee et al., NeurIPS 2018)
[4] A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection (Ren et al., NeurIPS 2019)
[5] Exploring the Limits of Out-of-Distribution Detection (Fort et al., NeurIPS 2021)
**W2/W3 (Connection of Bias-Variance Tradeoff and Data Scarcity)** Thank you very much for pointing this out. We will add further explanations on their connection in the camera-ready version if accepted. Intuitively, limited client data can lead to high variance and poor generalization in local models, encouraging collaboration with other clients for lower variance. However, collaborative estimates (e.g., FedAvg or local-global interpolation) introduce potential bias when client data is non-iid. Theorem 1 theoretically captures this intuition on the bias-variance tradeoff of interpolated estimates and their effect on generalization error, which is implicitly measured by client generalization accuracy in our experiments.
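The tradeoff can be made concrete with a toy scalar calculation (ours, not the paper's Theorem 1), where the interpolated mean estimate trades the local variance against the squared bias of the global estimate:

```python
import numpy as np

# Toy scalar mean estimation (our illustration, not the paper's setting):
# the local estimate is unbiased with variance sigma2/n; the global
# estimate has squared bias delta2 and, for simplicity, no variance.
sigma2, n = 4.0, 10
delta2 = 0.25

def mse(beta):
    """MSE of the interpolated estimate beta*local + (1-beta)*global."""
    return beta**2 * sigma2 / n + (1 - beta)**2 * delta2

betas = np.linspace(0.0, 1.0, 101)
best = betas[np.argmin(mse(betas))]

# The optimum balances the two terms: beta* = delta2 / (delta2 + sigma2/n).
assert abs(best - delta2 / (delta2 + sigma2 / n)) < 0.01
assert mse(best) < min(mse(0.0), mse(1.0))
```

Note that as local data grows (larger n), the optimal weight shifts towards the purely local estimate, matching the intuition above.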
**Q1 (Questions on the Class-Conditional Gaussian Distribution)** Please refer to our response to W2 on the justification of the class-conditional Gaussian model.
**Q2 (Setup of Tab. 1)** In Tab. 1, we introduce real-world covariate shift between client distributions by corrupting the inputs of the first 50 clients with natural image alterations [6]. This simulates FL settings where clients collect data from different devices and environments, introducing input noise not present in curated benchmark datasets. For details on image corruption, see Appendix C.1.1. Comparing Tab. 1 and 2, we see this covariate shift significantly reduces the generalization performance of FL methods.
Since half of the clients' data no longer matches the CIFAR10 distribution, there's increased bias in using global knowledge (as discussed in W2).
We assess methods under varying data scarcity to evaluate their ability to navigate the bias-variance tradeoff discussed in W2.
Regarding stability in low-sample settings, our method shows no consistent variance advantage/disadvantage over APFL and Ditto, but has significantly higher average client accuracy.
[6] Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations (Hendrycks et al., ICLR 2019)
**Q3 (Correction to Tab. 2)** We thank the reviewer for their attention to detail. This was a typo and the entry .145(0.4) should instead read .145(.04).
**Q4 (Gaussian Assumption in Practice)** As discussed in W2, we use the class-conditional Gaussian assumption, which is common in prior literature for describing latent representation distributions in practical settings. The empirical success of our generative classifiers supports its applicability in a variety of FL scenarios. Based on these findings, we conjecture that normality testing is not a prerequisite for applying our method.
However, to provide a reference measure of Gaussianity in our experiments, we train FedAvg on CIFAR10 Dir(0.5) and perform the Henze-Zirkler normality test on client features centered by class means, and observe that 89/100 clients follow a class-conditional Gaussian distribution at significance level p=0.05.
[7] A class of invariant consistent tests for multivariate normality (Henze and Zirkler, Communications in Statistics-Theory and Methods, 1990)
**L1 (Theorem 1 and Alternative Distributions)** For an explanation of the class-conditional Gaussian model and assumptions in Theorem 1, please refer to our response W2/W3.
We recognize that other models may perform better in some settings (as discussed in Appendix A), but we defer this optimization to future work. Based on our empirical results, the class-conditional Gaussian model appears reasonable for practical use.
**L2 (Robustness to Heterogeneity and Personalization Performance)** We apologize for any confusion and will make an effort to make the robustness of our method to data heterogeneity more clear.
In this work, we evaluated our method in the presence of various types of data heterogeneity, including quantity skew and prior probability shift (via Dirichlet-partitioning and sub-sampling), as well as covariate shift (via natural image corruptions [6]). Our empirical results indicate the ability of pFedFDA to generate personalized client models that are robust to heterogeneity, notably in the more challenging settings of data scarcity and client covariate shift.
Our rebuttal includes an additional experiment (Tab. 1) which shows that pFedFDA also generalizes well to new clients, even for clients with covariate shifts not seen at training time.
---
Rebuttal Comment 1.1:
Comment: After careful reading, I think the authors have made themselves clear on the weaknesses (and questions), and I have updated the rating. However, it is still regrettable that no other datasets are used in the scarcity/corruption experiments (which are crucial for the main idea).
---
Reply to Comment 1.1.1:
Comment: We are glad that our response addressed most of your concerns, and appreciate the increase in your score.
We value your feedback that additional benchmarks in low-sample covariate shift settings would be beneficial. Due to resource constraints, we cannot provide new results on TinyImageNet at this time, but we intend to run additional ablations for the camera-ready version if accepted.
For this discussion, we have conducted additional experiments on the DIGIT-5 dataset [8]. The DIGIT-5 dataset consists of the original MNIST samples, as well as digit characters from SVHN, USPS, MNIST-M, and synthetic datasets. We consider an FL system where each client holds data from a single source dataset, similar to recent works [9, 10]. This established multi-domain evaluation provides an additional level of covariate shift not present in the original federated-MNIST dataset.
In the table below, we show the average (std) client accuracy for selected baselines. For each FL method, we also indicate the average accuracy improvement compared to local training. In line with the results in our main text, pFedFDA has the strongest performance in data-scarce settings and remains competitive even when the local data volume becomes more sufficient.
We appreciate the feedback from this discussion and hope the included results and intended ablations will help clarify the advantages of pFedFDA in handling the bias-variance tradeoff of personalized federated learning.
| DIGIT-5 % Training Samples | 25 | 50 | 75 | 100 | Avg. Improvement |
|-------------------------|--------------------------------------|--------------------------------------|--------------------------------------|--------------------------------------|-------------------------|
| Local | $76.84(10.85)$ | $83.11(8.07)$ | $86.97(6.35)$ | $88.51(5.65)$ | - |
| FedAvg | $81.75(10.25)$ $\newline$ $(+4.91)$ | $85.09(9.24)$ $\newline$ $(+1.98)$ | $ 87.41(8.35)$ $\newline$ $(+0.44)$ | $88.19(7.92)$ $\newline$ $(+0.32)$ | $1.91$ |
| FedAvgFT | $\underline{85.61(7.17)}$ $\newline$ $(+8.77)$ | $\underline{88.72(6.39)}$ $\newline$ $(+5.61)$ | $90.75(5.50)$ $\newline$ $(+3.78)$ | $\mathbf{91.73(5.21)}$ $\newline$ $(+3.22)$ | $\underline{5.34}$ |
| Ditto | $83.85(9.13)$ $\newline$ $(+7.01)$ | $85.53(8.81)$ $\newline$ $(+2.42)$ | $87.43(8.39)$ $\newline$ $(+0.46)$ | $88.80(7.75)$ $\newline$ $(+0.29)$ | $2.54$ |
| FedPAC | $82.78(8.48)$ $\newline$ $(+5.94)$ | $87.94(7.03)$ $\newline$ $(+4.83)$ | $\mathbf{91.12(5.65)}$ $\newline$ $(+4.15)$ | $91.04(5.96)$ $\newline$ $(+2.53)$ | $4.36$ |
| pFedFDA | $\mathbf{86.54(7.80)}$ $\newline$ $(+9.70)$ | $\mathbf{90.05(5.73)}$ $\newline$ $(+6.94)$ | $\underline{90.75(5.36)}$ $\newline$ $(+3.78)$ | $\underline{91.56(5.21)}$ $\newline$ $(+3.05)$ | $\mathbf{5.86}$ |
[8] Learning to Generate Novel Domains for Domain Generalization (Zhou et al., ECCV 2020)
[9] Federated Learning from Pre-Trained Models: A Contrastive Learning Approach (Tan et al., NeurIPS 2022)
[10] Rethinking Federated Learning with Domain Shift: A Prototype View (Huang et al., CVPR 2023) | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their detailed comments and feedback. We will revise the paper accordingly to further clarify our work and address the points brought up in these discussions.
In our attached rebuttal PDF, we have provided the following additional experimental results:
Tab. 1: Evaluation of method generalization to clients unseen at training (in response to reviewers Qz6b and BoVr).
Tab. 2: Analysis of system run-time corresponding to each component of local training (in response to reviewer 3xQF).
Tab. 3: Evaluation of pFedFDA with intermittent updates of the interpolation parameter $\beta$ (in response to reviewer 3xQF and WK31).
Pdf: /pdf/0347900de4e1dd0ee5526dd33fa3a995dc348e5b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper introduces pFedFDA, a personalized Federated Learning (pFL) method designed to address the issue of client heterogeneity in federated learning. pFedFDA combines global knowledge through server aggregation with local knowledge through client-specific training and distribution estimation, enhancing the model's personalized performance on each client. The authors propose an algorithm that efficiently generates personalized models, demonstrating significant improvements in data-scarce settings through extensive computer vision benchmarks. The paper is well-structured, and the writing is clear, making it easy to follow the authors' arguments and experimental results.
Strengths: 1. The method of decomposing model training into shared representation learning and personalized classifier training, followed by adaptation to local feature distributions, is both innovative and promising for handling non-i.i.d. data in FL environments.
2. The paper provides strong empirical evidence through comprehensive experiments on various datasets, showcasing the superiority of pFedFDA in challenging distribution shift and data scarcity scenarios.
3. The paper is well-written, with a clear structure that logically progresses from the introduction of the problem to the presentation of the methodology and results.
Weaknesses: 1. In this paper, the mean value and covariance are utilized as the characteristic distribution of the data. Have you explored other statistical measures to describe this distribution?
2. In the ablation study, the results obtained by calculating multiple β values were unexpectedly lower compared to using a single β value. Intuitively, multiple β values should provide a more comprehensive understanding of the data distribution and thus yield better results. However, the experimental findings do not support this expectation, and there is no clear explanation for this discrepancy.
3. Note that pFedFDA uses the same feature extractor for all clients but employs heterogeneous classifiers. A key feature of pFL is the heterogeneity of client models. pFedFDA is somewhat limited in this regard. Are there methods to handle heterogeneous feature extractors in pFL scenarios?
Technical Quality: 3
Clarity: 3
Questions for Authors: The main concerns and questions are listed in the weaknesses. Please provide answers to address these concerns.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 (Choice of Statistical Measures)** We selected the mean and covariance as the class-conditional Gaussian model is uniquely defined by these parameters. While it would have been possible to communicate estimates of the inverse-covariance matrix, this adds unnecessary computation which we avoid in our implementation (refer to line 197).
If a distribution other than the class-conditional Gaussian is considered for modeling $p(z|y)$, it may be reasonable to send different sufficient statistics or estimate higher moments.
**W2 (Discussion of Beta Value Ablation)** Thanks for the interesting question. Indeed, by using separate coefficients for the means and covariance, the interpolated classifier is more flexible towards fitting the local distribution. While we optimize beta using k-fold validation, this is done over the set of training features, so there is still potential for over-fitting. We chose to not optimize over a separate validation set to make our results comparable to existing approaches which do not require additional held-out data.
**W3 (Limitations of pFedFDA Personalization)** We would first like to point out that using a shared backbone is not necessarily a limitation, but a common feature of recent representation-learning methods for PFL (e.g., FedRep [1], FedBABU [2], FedPAC [3]). This parameter-sharing approach learns generalizable features and simplifies the personalization task of each client to the final classification layer.
Still, if model heterogeneity is desired (e.g., due to different device computation resources), pFedFDA could be extended to these settings by adopting an approach similar to FedProto[4]. Specifically, clients could collaborate on the means and covariance of the global feature distribution, without broadcasting or aggregating the backbone model. As discussed in Sec. 4.2, our generative classifier objective is similar to the regularization term and inference objective of FedProto, where we use an estimated covariance matrix rather than implicitly assuming the covariance to be the identity matrix.
[1] Exploiting Shared Representations for Personalized Federated Learning (Collins et al., ICML 2021)
[2] FedBABU: Towards Enhanced Representation for Federated Image Classification (Oh et al., ICLR 2022)
[3] Personalized Federated Learning with Feature Alignment and Classifier Collaboration (Xu et al., ICLR 2023)
[4] FedProto: Federated Prototype Learning across Heterogeneous Clients (Tan et al., AAAI 2022) | null | null | null | null | null | null |
CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition | Accept (poster) | Summary: This paper tackles the issue of inter-entity distribution discrepancies in multi-entity action recognition. The authors propose a convex hull adaptive shift method to minimize cross-entity discrepancies, where CLB and MPMMD are introduced to assist the learning procedure. The method is verified to be effective across various datasets and backbones.
Strengths: 1. This paper proposes an interesting idea, using an implicit convex hull as a constraint to achieve adaptive coordinate shift.
2. The proposed approach is verified to be effective on various datasets and backbones.
3. This method can serve as a good contribution to the skeleton-based human action recognition community.
Weaknesses: 1. The introduction section should be improved. For example, on line 54, why do we need to achieve discrepancy minimization? On line 52, why do we need to achieve sample-adaptive coefficients? The motivation should be highlighted. The links among the proposed items (lines 57-60) should also be improved with more insights.
2. The novelty of the CLB is limited. The format of attributes learning shown in Eq.8 is commonly used to construct concept learners. What is the difference between the concept bottleneck [1] and the CLB? If you use the concept learner from some existing works, e.g., [2], will it be better than CLB?
[1] Shin S, Jo Y, Ahn S, et al. A closer look at the intervention procedure of concept bottleneck models[C]//International Conference on Machine Learning. PMLR, 2023: 31504-31520.
[2] Wang B, Li L, Nakashima Y, et al. Learning bottleneck concepts in image classification[C]//Proceedings of the ieee/cvf conference on computer vision and pattern recognition. 2023: 10962-10971.
3. More insights should be given in Section 4.2. Why does the proposed method help? The authors are encouraged to enrich the analysis.
4. The authors are encouraged to discuss the computational complexity brought by the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Improving the Introduction Section:
a. Why is it necessary to achieve discrepancy minimization (line 54)?
b. Why do we need to achieve sample adaptive coefficients (line 52)?
c. Can the authors highlight the motivation behind these needs and improve the links among the proposed items (lines 57-60) with more insights?
2. Novelty of the CLB:
a. How does the concept bottleneck (CLB) differ from the commonly used format of attributes learning in concept learners, such as in Eq. 8?
b. What are the differences between the CLB and the concept bottleneck models discussed in Shin et al. (2023)?
c. If the concept learner from existing works (e.g., Wang et al. (2023)) were used, would it perform better than the CLB?
3. Insights in Section 4.2:
a. Why does the proposed method provide benefits?
b. Can the authors provide a more detailed analysis to enrich Section 4.2?
4. Computational Complexity:
a. What is the computational complexity introduced by the proposed method?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes in supplementary
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your encouraging and constructive comments. We appreciate your recognition of the methodology, experimental results, and contributions. We understand that your concerns may arise from the need for greater clarity and detail in presenting our motivations and methodology. Below, we address these issues one by one and have clarified them in the revised paper. We hope our response will address your concerns effectively.
----
**Q1a.** *Why is it necessary to achieve discrepancy minimization?*
These discrepancies can introduce bias into the backbone models, leading to suboptimal optimization and performance, as shown in the upper histogram of Fig. 1(b). The lower histogram of Fig. 1(b) indicates that minimizing these discrepancies helps reduce bias in the classification backbone, thereby enhancing its recognition performance on this task.
----
**Q1b.** *Why do we need to achieve sample adaptive coefficients?*
Achieving sample adaptive coefficients is a necessary step to adaptively reposition each skeleton sequence. As shown in Table 4 in our main draft, the recognition accuracy drops to 91.20\% **without sample-adaptive coefficients**. It implies that sample-adaptive weight representations are important to achieve better recognition performance.
----
**Q1c.** *Can the authors highlight the motivation behind these needs and improve the links among the proposed items with more insights?*
We have revised the paragraph to clearly present our motivation and the connections among the proposed components:
"... **Specifically, CHASE consists of a learnable parameterized network and an auxiliary objective.** The parameterized network can achieve **plausible and sample-adaptive** repositioning of skeleton sequences through two crucial components. First, the Implicit Convex Hull Constrained Adaptive Shift (ICHAS) ensures that the new origin of the coordinate system is within the skeleton convex hull. Second, the Coefficient Learning Block (CLB) provides a lightweight parameterization of the mapping from skeleton sequences to their specific coefficients in ICHAS. Moreover, **to guide the optimization of this network for discrepancy minimization**, we propose the Mini-batch Pair-wise Maximum Mean Discrepancy (MPMMD) as the additional objective. This loss function quantifies ... **In conclusion, CHASE works as a sample-adaptive normalization method to mitigate inter-entity distribution discrepancies, which can reduce bias in the subsequent backbone and enhance its multi-entity action recognition performance.**"
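As a hedged toy sketch of the convex-hull idea (our own construction; the paper's exact ICHAS parameterization is not reproduced here), softmax-normalized coefficients form a convex combination of the joints, so the new origin is guaranteed to lie inside the skeleton's convex hull:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical 2D "skeleton" with three joints and sample-specific scores.
joints = np.array([[0., 0.], [2., 0.], [1., 2.]])   # (num_joints, dims)
scores = np.array([0.3, -0.1, 1.2])                 # e.g. a learned output

w = softmax(scores)          # w >= 0 and sums to 1: convex coefficients
origin = w @ joints          # lies inside the joints' convex hull
shifted = joints - origin    # adaptively repositioned coordinates

assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
```

Because the coefficients are non-negative and sum to one, the shift is plausible by construction regardless of the sample-specific scores.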
----
**Q2a \& b.** *What is the difference between the concept bottleneck (Shin et al. (2023)) and the CLB?*
After carefully reviewing the relevant literature [1,2], we have identified several key differences between our CLB and concept bottleneck models (CBMs):
1. **Motivation**: CBMs are primarily designed to enhance interpretability [1]. Different from CBMs, our CLB is the parameterization of a mapping, which maps the input skeleton sequence to the coefficient matrix. Thus, **while CBMs focus on interpretability, CLB is geared towards adaptively adjusting the skeleton sequence representation to reduce inter-entity discrepancies**.
2. **Inputs**: CBMs typically require input data $x\in \mathbb{R}^{d}$, binary concepts $c\in \{0,1\}^{k}$, and target responses $y\in Y$ [1,2]. However, **in multi-entity action recognition tasks, there are no binary concepts to work with**, making the direct application of CBMs unsuitable for our task.
3. **Architecture Design**: The architecture of CBMs, as implemented in [1], is expressed as $\hat{y} = f(\delta(g(x)))$, where $\delta$ denotes ReLU, and $f,g$ denote InceptionV3 \& an MLP. In [2], it's based on slot attention and is formulated as $a_k = \phi(Q(c_k)^TK(F'))$, where $Q,K$ are nonlinear transformations. Our CLB, however, uses a different architecture: $W=\psi(X)=W_3\delta(W_2\phi(W_1X+b))$. This distinction highlights that **our CLB does not rely on InceptionV3+ReLU+MLP or slot attention mechanisms**.
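A shape-level NumPy sketch of such a CLB-style mapping $W=W_3\delta(W_2\phi(W_1X+b))$ (the sizes and the nonlinearities $\phi,\delta$ below are illustrative placeholders, not the paper's choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h1, h2, o, n = 8, 16, 16, 4, 5            # illustrative sizes only
W1, b = rng.normal(size=(h1, d)), rng.normal(size=(h1, 1))
W2, W3 = rng.normal(size=(h2, h1)), rng.normal(size=(o, h2))

phi = np.tanh                                 # placeholder nonlinearity
delta = lambda t: np.maximum(t, 0.0)          # placeholder nonlinearity

X = rng.normal(size=(d, n))                   # stand-in input sequence
W = W3 @ delta(W2 @ phi(W1 @ X + b))          # W = W3 d(W2 p(W1 X + b))
assert W.shape == (o, n)
```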
----
**Q2c.** *If the concept learner from existing works (e.g., Wang et al. (2023)) were used, would it perform better than the CLB?*
CBMs require binary concepts [1,2], which is **not applicable to the multi-entity action recognition task**. Therefore, adapting a concept learner from existing works to our task is not feasible. Given this, it is not possible to directly compare their performance with our CLB.
----
**Q3a.** *Why does the proposed method provide benefits?*
By adaptively shifting skeleton sequences, CHASE effectively mitigates inter-entity distribution discrepancies in multi-entity skeletal data, which can unbias the subsequent classification backbone and boost their recognition performance.
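For context, a textbook biased squared-MMD estimate with an RBF kernel between two entities' feature sets can be sketched as follows (the paper's MPMMD is a mini-batch pair-wise variant; this generic form only illustrates the kind of discrepancy being minimized):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased squared MMD with an RBF kernel (generic textbook form,
    not the paper's MPMMD)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# Identical sample sets give zero discrepancy; shifted ones do not.
X = np.array([[0., 0.], [1., 1.], [2., 0.]])
assert np.isclose(rbf_mmd2(X, X), 0.0)
assert rbf_mmd2(X, X + 5.0) > rbf_mmd2(X, X)
```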
----
**Q3b.** *Can the authors provide a more detailed analysis to enrich Section 4.2?*
Yes, we have provided a more detailed analysis of Section 4.2, as mentioned in Section F (line 746-757) of our initial submission:
"In Table 1, we observe that CHASE yields varying degrees of accuracy improvement across different baseline models and benchmarks. The performance gains are influenced by both the backbone models and the datasets, as CHASE functions as an additional normalization step that mitigates bias in the backbone introduced by inter-entity distribution discrepancies.
For baseline backbones, this is owing to differences in their backbone architecture design, parameter count and training objective. For example, ... "
----
**Q4:** *What is the computational complexity introduced by the proposed method?*
We have discussed the computational complexity in Section 4.3 (lines 270-275) \& Section F (lines 774-785) of our initial submission. As presented in Table 9, the number of trainable parameters is about **26.37 k**. We can approximate that the number of trainable parameters is increased by $(U+1+C_2)\times C_1 + C_2 \times U$. In terms of computational complexity, the FLOPs of CHASE are approximately **2.50 M**.
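To make the parameter estimate concrete, the count formula above can be evaluated directly. A minimal sketch; the values of $U$, $C_1$, $C_2$ below are illustrative placeholders, not the paper's actual configuration:

```python
def clb_param_count(U, C1, C2):
    """Approximate trainable-parameter increase of CHASE,
    per the formula (U + 1 + C2) * C1 + C2 * U."""
    return (U + 1 + C2) * C1 + C2 * U

# Illustrative values: U = number of entities; C1, C2 = hidden widths
# (chosen here only for demonstration).
print(clb_param_count(U=2, C1=128, C2=128))  # 17024
```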
---
Rebuttal 2:
Title: Looking Forward to Further Discussions
Comment: Dear Reviewer,
Thanks again for your insightful comments on our paper.
We have submitted the response to your comments and the global response with a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score.
Thank you
---
Rebuttal Comment 2.1:
Title: To the authors
Comment: Dear authors,
thank you very much for your rebuttal. I think most of my concerns are handled and I will improve my score to 6.
Best,
---
Reply to Comment 2.1.1:
Title: Thank You for Your Positive Feedback and Consideration
Comment: Dear Reviewer,
Thank you for your positive feedback and for taking the time to review our rebuttal. We're glad that our responses addressed your concerns, and we appreciate your willingness to improve the score. Thank you again for your thoughtful consideration.
Best regards, | Summary: This paper proposes CHASE, a multi-entity skeleton data augmentation/preprocessing technique, to mitigate inter-entity distribution gaps and improve the multi-entity action recognition. Specifically, the authors formulate a new constraint called ICHAS, design a lightweight block CLB to learn the nonlinear mapping from input to the weight matrix in ICHAS, and introduce an objective to guide the discrepancy minimization in CLB training. The authors conduct comprehensive experiments to show the effectiveness of the method.
Strengths: a) The paper is well-written, easy to understand, and well-organized.
b) The method is well-motivated, well-ablated, and the experiments are presented clearly.
Weaknesses: a) There are some typos that need to be revised carefully, e.g. l131 “be be”, Table 4 “-68.56”.
b) Regarding clarity, the captions of Figures and Tables can be improved.
Technical Quality: 2
Clarity: 3
Questions for Authors: a) Figure 2 is very informative and good for understanding the whole method. However, there are still some notations needed to be explained, e.g. the green and red circles under the two skeleton coordinates. It would be helpful if the authors introduce the method details (in sections 3.1 to 3.3) with references to specific parts of Figure 2.
b) It’s good to have multiple runs and report the standard deviation. How many seed initializations exactly do the authors use?
c) From Table 1, the performance margin varies a lot. On H2O and CAD, the top-1 accuracies have increased by over 9%. Could the authors explain or provide some discussion on this point?
d) From Table 7 in the appendix, it seems like the authors use 25 ground-truth 3D joints as skeleton inputs for NTU-60 and NTU-120 datasets. Since 2D estimated joints tend to achieve better action recognition performance in dominant models, do the authors verify CHASE using 17 estimated 2D joints with COCO layout as input?
e) Table 6 shows the mixed recognition results on the entire NTU-120 dataset with X-Sub setting. Table 8 in the appendix shows the mixed recognition results on NTU-120 X-Sub and X-Set settings. Why not replace Table 6 with Table 8? Similar to Table 5 and Table 9, why not replace Table 5 with Table 9 or integrate the number of parameters into Table 1 (may discard the column of ‘Venue’)?
Small details (no need to address them, just suggestions):
a) Regarding equations (9) and (10), it would be better to clarify the definitions of sup(*) and C(E,2).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Although the method has been verified on six benchmarks, these datasets are relatively small, with the number of categories ranging from 4 to 36, except Assembly 101 has 1380 action categories. However, the performance gain on Assembly 101 is very small (<=0.21%). The reviewer has some concerns about the generalization of this method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed and constructive comments. We appreciate your recognition of the writing, motivation and experiments. We understand that some of your concerns may stem from clarity and experimental details. Below, we address these issues one by one and have clarified them in the revised paper. We hope our response will address your concerns.
---
**Q1:** *There are some typos that need to be revised carefully, e.g. l131 “be be”, Table 4 “-68.56”. Regarding clarity, the captions of Figures and Tables can be improved.*
We are grateful for your indications of typos and suggestions for improving clarity. We have checked and revised the manuscript for these issues. For example, we have revised the caption of Figure 1 to provide a more concise illustration of our motivation.
---
**Q2:** *There are still some notations needed to be explained in Figure 2. It would be helpful if the authors introduce the method details (in sections 3.1 to 3.3) with references to specific parts of Figure 2.*
We have added notation explanations to Figure 2 in the revised version. For example, the green and red circles represent feasible and infeasible $\vec{p^*}$ as formulated by Eq. 3, respectively. Additionally, we have provided details in sections 3.1 to 3.3 with references to Figure 2 to facilitate understanding.
---
**Q3:** *How many seed initializations exactly do the authors use?*
We adopt three seed initializations in most settings, which means three runs from training to evaluation.
---
**Q4:** *Why does the performance margin vary a lot?*
**As CHASE functions as an additional normalization step, the performance gains are influenced by the bias both in the datasets and introduced by the backbone models, especially their different architecture designs, data scales, and label spaces.** We have discussed this in detail in Section F (lines 746-757) of our submission:
"In Table 1, we observe that CHASE yields varying degrees of accuracy improvement across different baseline models and benchmarks. ... For different benchmarks, the variations in performance margins are due to differences in data scale and label space (see Table 7). For example, ... In contrast, H2O and CAD are relatively small in data scale. The significant accuracy increase implies that our CHASE can unbias the subsequent backbone more effectively with limited training data."
---
**Q5:** *Do the authors verify CHASE using 17 estimated 2D joints with COCO layout as input?*
We have evaluated CHASE using 17 estimated 2D joints as input on the **CAD and VD benchmarks**, as visualized in Figure 3 of our initial submission. Additionally, we have conducted experiments on the **NTU Mutual 26 dataset** using 17 estimated 2D joints. The results are reported in the following table. It demonstrates that our proposed CHASE also enhances the performance of the vanilla counterparts when using 2D joint inputs.
Method | X-Sub (\%) | X-Set (\%)
:----|:----:|:----:
CTR-GCN | ${90.10}_{(\pm0.05)}$ | ${91.35}_{(\pm0.15)}$
**+ CHASE (Ours)** | **${90.56}_{(\pm0.09)}$** | **${92.38}_{(\pm0.81)}$**
---
**Q6:** *Why not replace Table 6 with Table 8? Similar to Table 5 and Table 9, why not replace Table 5 with Table 9 or integrate the number of parameters into Table 1?*
Due to the page limit of the initial submission, we included excerpted versions (Table 5 \& 6) in the main draft and placed the full versions (Table 8 \& 9) in the supplementary material. We will replace the excerpted versions with the full tables, if the paper is accepted, as additional content pages will be allowed. Integrating the number of parameters into Table 1 might cause confusion, as the number of model parameters varies across different benchmarks. Therefore, we opted to keep the parameter details separate to maintain clarity.
---
**Q7:** *It would be better to clarify the definitions of sup(\*) and C(E,2).*
We have clarified the definitions in the revised version. The notation $\sup(\cdot)$ stands for the supremum. In Eq. (9), $\sup(\cdot)$ denotes the least upper bound that the expression $\mathbb{E}[f(x)]-\mathbb{E}[f(y)]$ can attain as the function $f$ ranges over a specific class of functions. $C(E,2)=\binom{E}{2}=E(E-1)/2$ stands for the number of combinations of $E$ items taken 2 at a time without repetition. In Eq. (10), $C(E,2)$ denotes the total count of possible entity pairs.
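Both quantities can be illustrated numerically. The sketch below is hedged: `math.comb(E, 2)` counts entity pairs, and a biased empirical RBF-kernel MMD$^2$ stands in for the supremum-based definition in Eq. (9) (the paper's exact kernel and estimator may differ):

```python
import math
from itertools import combinations
import numpy as np

E = 4  # illustrative number of entities
pairs = list(combinations(range(E), 2))
print(len(pairs), math.comb(E, 2))  # both equal E*(E-1)/2 = 6

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased empirical MMD^2 with an RBF kernel -- a stand-in for the
    supremum form sup_f E[f(x)] - E[f(y)]; the paper's estimator may differ."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (200, 3))
Y = rng.normal(2.0, 1.0, (200, 3))  # shifted distribution
print(rbf_mmd2(X, X))               # identical samples: 0
print(rbf_mmd2(X, Y) > rbf_mmd2(X, X))  # shifted samples give a larger gap
```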
---
**Q8:** *These datasets are relatively small ... The reviewer has some concerns about the generalization of this method.*
We provide the following reasons to support the generalization capability of CHASE:
1. We have compared our approach with recent methods in multi-entity action recognition, as shown in the table below. **These methods are typically evaluated on $\leq 5$ datasets, with some using only a subset of the datasets we have utilized.** In contrast, we have verified CHASE across six diverse benchmarks, including person-to-person interactions, hand-to-object interactions, and group activities. Our approach outperforms these methods, as reported in Table 1 of our submission.
2. Additionally, we have evaluated CHASE on ASB101, which is the largest and most challenging benchmark for this task. **Given the complexity of this dataset, achieving substantial accuracy improvements is inherently difficult.** Although the improvement on ASB101 is modest, it still demonstrates the effectiveness of CHASE in a challenging scenario.
Method | Venue | \#Dataset | Max \#Category | Datasets for Multi-entity Action Recognition
:----|:----:|:----:|:----:|:----:
IGFormer | ECCV'22 | 3 | 26 | NTU Mutual 11/26, SBU (200 samples)
ISTA-Net | IROS'23 | 4 | 1380 | NTU Mutual 26, SBU (200 samples), ASB101, H2O
H2OTR | CVPR'23 | 2 | 45 | H2O, FPHA (1175 samples)
me-GCN | arXiv'24 | 3 | 1380 | NTU Mutual 11/26, ASB101
EffHandEgoNet | arXiv'24 | 2 | 45 | H2O, FPHA (1175 samples)
AHNet-Large | PR'24 | 5 | 26 | NTU Mutual 11/26, CAD, VD, PKU-MMD mutual
**Ours** | - | 6 | 1380 | NTU Mutual 11/26, H2O, ASB101, CAD, VD
---
Rebuttal 2:
Title: Looking Forward to Further Discussions
Comment: Dear Reviewer,
Thanks again for your insightful comments on our paper.
We have submitted the response to your comments and the global response with a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score.
Thank you | Summary: This paper focuses on the interesting problem of the normalization strategy for multi-entity skeletons in skeleton-based action recognition. The proposed method is intuitive, and the authors provided detailed implementation details for reproduction. However, this work is unclear, and the experiments are unconvincing.
Strengths: This paper focuses on the interesting problem of the normalization strategy for multi-entity skeletons in skeleton-based action recognition. The proposed method is intuitive, and the authors provided detailed implementation details for reproduction.
Weaknesses: (1) The purpose of the multi-entity action recognition task is not clear. Is it to recognize each individual’s action or to classify group activities? If the purpose varies across different datasets, please clarify this in the experiment. Additionally, I am curious whether the optimal normalization strategy differs for these two purposes. For example, the method used in S2CoM seems more suitable for recognizing each individual’s action.
(2) What is the main difference between the proposed method and the simple strategy of shifting the origin of multi-entities to their common center? Please add a comparison experiment with this method.
(3) Although the normalization strategy is an important trick and can bring significant improvement in the action classification task, I still have a concern about whether it is worth conducting an additional network to achieve this simple trick by introducing extra learnable parameters. Some heuristic strategies may be more efficient and general. Accordingly, can the proposed module be transferred among different datasets without retraining the module?
Technical Quality: 2
Clarity: 2
Questions for Authors: What is the main difference between the proposed method and the simple strategy of shifting the origin of multi-entities to their common center? Please add a comparison experiment with this method.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: (1) The purpose of the multi-entity action recognition task is not clear. Is it to recognize each individual’s action or to classify group activities? If the purpose varies across different datasets, please clarify this in the experiment. Additionally, I am curious whether the optimal normalization strategy differs for these two purposes. For example, the method used in S2CoM seems more suitable for recognizing each individual’s action.
(2) What is the main difference between the proposed method and the simple strategy of shifting the origin of multi-entities to their common center? Please add a comparison experiment with this method.
(3) Although the normalization strategy is an important trick and can bring significant improvement in the action classification task, I still have a concern about whether it is worth conducting an additional network to achieve this simple trick by introducing extra learnable parameters. Some heuristic strategies may be more efficient and general. Accordingly, can the proposed module be transferred among different datasets without retraining the module?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments. We appreciate your recognition of the motivation behind CHASE and the extensive implementation details for reproducibility. We understand that your concerns may stem from some misunderstandings and the presentation of ablation studies. Below, we address these issues one by one and have clarified them in the revised paper. We hope our response will address your concerns.
---
**Q1:** *The purpose of the multi-entity action recognition task is not clear. Is it to recognize each individual’s action or to classify group activities? If the purpose varies across different datasets, please clarify this in the experiment. Additionally, I am curious whether the optimal normalization strategy differs for these two purposes.*
We apologize for any confusion.
1. The purpose of the multi-entity action recognition task is consistently to **recognize the comprehensive actions performed by multiple entities**, rather than individual actions. Multiple entities can include human bodies, hands, and objects. This is discussed in Section 1 (lines 21-23) and Section A.1 (lines 613-619) of our submission. This goal aligns with many related works focusing on interactive actions and group activities [11,25,85]. To avoid misunderstandings, we have clarified this better in Section 1 of the revised version.
2. The aim **does not** vary across different datasets. All experiments are conducted with the purpose of recognizing multi-entity actions. Notably, in the CAD and VD benchmarks, individual labels are leveraged as an auxiliary objective to further improve group action recognition performance, as mentioned in lines 708-709 and 712-713. This strategy is a common practice adopted by most models trained on these two datasets [83,85].
3. Exploring whether the optimal normalization strategy differs for these two purposes is indeed an interesting topic. However, it is not the focus of this paper. We will delve into this issue in future work.
---
**Q2:** *What is the main difference between the proposed method and the simple strategy of shifting the origin of multi-entities to their common center? Please add a comparison experiment with this method.*
In our initial submission, we have reported the experimental result of the comparison of CHASE and this simple strategy in Table 2, highlighting the superior performance of CHASE.
1. **The main difference is that the common center of mass (CoM) of all entities is just one point in the search space for** $\vec{p^*}$ **in CHASE**. As indicated in lines 149-150 of our manuscript, the common CoM $\bar{\vec{p}}$ lies in the open convex hull of $X$, proven by simply taking all $\tilde{\alpha}_i=1/U\ (1\leq i\leq U)$. Therefore, **shifting the origin of multi-entities to their common center is just one possible outcome among all those admissible in CHASE**.
2. While it is intuitive to shift the origin to the common center, **experimental results show that this is not always the optimal choice**. In our main draft, we have reported the result of this comparison in Table 2 (the last two rows). Our proposed CHASE achieves 91.30\% top-1 accuracy on the NTU 26 X-Sub benchmark, while this simple strategy (denoted as S2CoM†) obtains 90.79\%. This comparison implies that adopting an adaptive $\vec{p^*}$ for each skeleton sequence sample is superior to the intuitive approach.
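The containment argument above is easy to verify numerically: with uniform coefficients $\tilde{\alpha}_i = 1/U$, the convex combination of entity positions equals their common center of mass, so the CoM is one admissible shift point. A minimal NumPy sketch with made-up entity centroids (the actual method operates on full skeleton sequences):

```python
import numpy as np

# Illustrative 3D centroids, one per entity (U = 3).
U = 3
P = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 2.0, 1.0]])

alpha = np.full(U, 1.0 / U)  # uniform coefficients: alpha_i >= 0, sum(alpha) = 1
p_star = alpha @ P           # convex combination of the entity centroids
print(p_star)                # equals the common center of mass P.mean(axis=0)
```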
---
**Q3:** *I still have a concern about whether it is worth conducting an additional network to achieve this simple trick by introducing extra learnable parameters. Some heuristic strategies may be more efficient and general.*
**Our proposed CHASE outperforms the alternative strategies**, as shown in Table 2 of our submission. We acknowledge that there is an inevitable trade-off between efficiency and performance. However, we believe our proposed CHASE strikes a good balance in this trade-off. We have addressed your concerns with extensive ablation studies in Tables 2 and 9. In our initial submission, Table 2 presents a comparison with many alternative strategies, including BatchNorm, S2CoM, and data augmentations. Our CHASE achieves **the best recognition performance**, highlighting the advantage of adaptively shifting skeletons to unbias the subsequent classification backbone. Moreover, as shown in Table 9, the number of trainable parameters is approximately 26.37k, which **increases the parameter count by only 1\%-2\%**. These findings show the advantages of adopting CHASE for this task.
---
**Q4:** *Accordingly, can the proposed module be transferred among different datasets without retraining the module?*
In most cases, **we can't directly transfer it without retraining, as the input shapes vary across different skeleton datasets**, indicated by Table 7. However, your insightful suggestion prompted us to investigate whether CHASE can be transferred if we align the input shapes of two datasets. We conducted an experiment on a modified version of the H2O dataset, aligning its input skeleton shape to the ASB101 dataset by discarding object poses and sampling 70 frames (as used in ASB101). The results in the following table demonstrate that both the frozen CHASE module (pretrained on ASB101) and the retrained CHASE module improve the performance of the CTR-GCN backbone. This implies the transferability of our proposed CHASE, provided the skeleton sequences are aligned.
Method | Acc (\%) | $\Delta$ (\%)
:-------- | :-----: | :-----:
CTR-GCN | ${48.48}_{(\pm2.91)}$ | -
\+ CHASE (retrained) | ${56.47}_{(\pm1.59)}$ | $+7.99$
\+ CHASE (frozen, pretrained on ASB101) | ${56.61}_{(\pm4.41)}$ | $+8.13$
---
Rebuttal Comment 1.1:
Title: Rebuttal Comment
Comment: I keep my initial score, as the authors did not manage to handle my concerns, like "transferred among different datasets without retraining the module". Besides, the multi-entity action recognition task is still confusing.
---
Rebuttal 2:
Title: Looking Forward to Further Discussions
Comment: Dear Reviewer,
Thanks again for your insightful comments on our paper.
We have submitted the response to your comments and the global response with a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score.
Thank you
---
Rebuttal 3:
Title: Clarifications on Transferability and Multi-Entity Action Recognition
Comment: Dear Reviewer,
Thank you for your feedback. We apologize for not fully addressing your concerns in our initial response.
1. We have followed your suggestions, conducting the experiments to evaluate the transferability of our CHASE module across different datasets without retraining. Specifically, we first trained the CHASE + CTR-GCN backbone on the challenging ASB101 dataset, achieving a top-1 accuracy of 28.03%. **We then transferred the CHASE module to the H2O (two-hand version) dataset without retraining, where it achieved 56.61% accuracy.** **This result outperforms both retraining the module on H2O (56.47%) and training only the CTR-GCN backbone on H2O (48.48%).** These findings demonstrate that our proposed module can indeed be transferred among different datasets without retraining.
2. Multi-entity action recognition **aims to classify interactions involving multiple entities, which could be people, objects, or other elements within a scene.** Examples of such actions include *cheers and drink*, *exchanging things*, *walking apart,* and *talking*. Group activities are a subset of multi-entity actions [73, 75, 76]. Unlike traditional action recognition, which typically focuses on a single subject performing a single action, multi-entity action recognition addresses the complexity of interpreting actions that involve multiple participants or objects interacting simultaneously. We hope this clarification addresses the confusion.
We appreciate your insights and hope this additional information helps to resolve your concerns.
Best regards, | Summary: The paper proposes a normalization method for skeleton-based multi-entity recognition based on finding the center of mass within the convex hull of the spatio-temporal domain of the point cloud defined by the skeletons over a sequence. The main idea is to "center" the world of skeletons to unbias the subsequent detector and boost their performance. Building on this motivation, the authors find that a fixed, learnable parameterized network can be used for that purpose, facilitating the inference. The experiments demonstrate that their proposed algorithm boosts performance over the corresponding baselines
Strengths: The paper is technically sound and the authors follow a proper mathematical derivation that leads to the design of CHASE in a clever manner.
The paper includes an extensive supplementary material with code and further analysis that make the paper rather complete.
The method is elegant and simple, providing with a very efficient, lightweight network that shows provable performance on a broad variety of datasets. Ablation studies are conducted to validate the proposed parts, as well as to compare against other normalization alternatives.
The paper is well documented with an extensive coverage of related literature.
Weaknesses: Overall I believe that the presentation should be improved, clearly stating the contribution and motivation for it. It takes a good read to understand that what the authors are proposing as a lightweight network is the result of mathematically deriving an iterative approach for normalization of skeletons within their convex hull. It should be clearly stated that the method aims to accompany other methods for multi-entity activity recognition by adding an extra normalization step, which consists of what is presented in Section 3.
Similarly, the sketch depicted in Figure 1 is a bit confusing and does not really illustrate what the authors aim to solve. I would suggest the authors to provide a clear motivation example that leads to their method. The caption in Fig. 1 is rather poorly written (I did not understand it at least).
Technical Quality: 3
Clarity: 2
Questions for Authors: I would insist on the authors to please elaborate a bit better on the motivation and an example where former normalization leads to the classifiers to produce wrong results, with their method mitigating such problem.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of the methodology of CHASE, proper mathematical derivation, and extensive experiments. We understand that your concerns may stem from clarity and presentation of our contributions and motivation. Below, we address these issues one by one and have clarified them in the revised paper. We hope our response effectively addresses your concerns.
---
**Q1:** *The presentation should be improved, clearly stating the contribution and motivation for it. It should be clearly stated that the method aims to accompany other methods for multi-entity activity recognition by adding an extra normalization step, which consists of what is presented in Section 3.*
We deeply appreciate your time and thorough review of our paper. We apologize for any confusion in the presentation of our contribution and motivation. We have revised the final paragraph of the Introduction section to better articulate our contributions and motivation.
The contributions of this paper are three-fold:
1. To the best of our knowledge, we are the first to investigate the issue of inter-entity distribution discrepancies in multi-entity action recognition. Our proposed method, Convex Hull Adaptive Shift for Multi-Entity Actions, effectively addresses this challenge. **Our main idea is to adaptively reposition skeleton sequences to mitigate inter-entity distribution gaps, thereby unbiasing the subsequent backbones and boosting their performance.**
2. **Serving as an additional normalization step for backbone models, CHASE consists of a learnable network and an auxiliary objective.** Specifically, the network is formulated by the Implicit Convex Hull Constrained Adaptive Shift, together with the parameterization of a lightweight Coefficient Learning Block, which learns sample-adaptive origin shifts within the skeleton convex hull. Additionally, the Mini-batch Pair-wise Maximum Mean Discrepancy objective is proposed to guide the discrepancy minimization.
3. Experiments on NTU Mutual 11, NTU Mutual 26, H2O, Assembly101, the Collective Activity Dataset and the Volleyball Dataset consistently verify our proposed method, enhancing the performance of single-entity backbones in the multi-entity action recognition task.
Our motivation is:
When using *Vanilla* (a common practice), the estimated distributions of joints from different entities show significant discrepancies, as shown in Figure 1(a). These discrepancies can introduce bias into backbone models, leading to suboptimal optimization and poor recognition performance, as depicted in the upper histogram of Figure 1(b). Although *S2CoM* (an intuitive baseline approach) can reduce these discrepancies, it results in wrong predictions by the classifiers due to a complete loss of inter-entity information. To address the inter-entity distribution discrepancy problem, we propose a Convex Hull Adaptive Shift based multi-Entity action recognition method (CHASE). Serving as an additional normalization step, CHASE aims to accompany other single-entity backbones for enhanced multi-entity action recognition. Our main insight lies in the adaptive repositioning of skeleton sequences to mitigate inter-entity distribution gaps, thereby unbiasing the subsequent backbone and boosting its performance.
---
**Q2:** *The sketch depicted in Figure 1 is a bit confusing and does not really illustrate what the authors aim to solve. I would suggest the authors to provide a clear motivation example that leads to their method. The caption in Fig. 1 is rather poorly written.*
We apologize for any confusion caused by Figure 1. We clarify the motivation example in Figure 1 that leads to our method as follows:
1. **What we aim to solve**: When using Vanilla (a common practice), the estimated distributions of joints from different entities show significant discrepancies, as shown in Figure 1(a). **These discrepancies can introduce bias into backbone models, leading to suboptimal optimization and poor recognition performance**, as depicted in the upper histogram of Figure 1(b). Though S2CoM (an intuitive baseline approach) can mitigate the discrepancies, **it makes the classifiers produce wrong predictions due to a complete loss of inter-entity information**. Therefore, this figure visualizes the problem we aim to solve.
2. **The effectiveness of CHASE**: Figure 1(a) and the lower histogram of Figure 1(b) highlight that CHASE effectively mitigates these discrepancies. Our method helps reduce bias in the subsequent classifiers, thereby enhancing their performance in the recognition task.
In the revised version of the paper, we have updated Figure 1 and its caption to clearly illustrate the motivations and contributions mentioned above (see the pdf file in global response). The caption of Fig.1 is modified as follows:
Figure 1: **Inter-entity distribution discrepancies in multi-entity action recognition task.** (a) We delineate three distinct settings: *Vanilla* (a common practice), *S2CoM* (an intuitive baseline approach), and *CHASE* (our proposed method). Column 2 illustrates spatiotemporal point clouds defined by the skeletons over $10^4$ sequences. Column 3-5 depict the projections of estimated distributions of these point clouds onto the x-y, z-x, and y-z planes. These projections reveal significant inter-entity distribution discrepancies when using *Vanilla*. (b) The discrepancies observed in *Vanilla* introduce bias into backbone models, leading to suboptimal optimization and poor performance. Although *S2CoM* can reduce these discrepancies, it makes the classifiers produce wrong predictions due to a complete loss of inter-entity information. With the lowest inter-entity discrepancy, our method unbiases the subsequent backbone to get the highest accuracy, underscoring its efficacy.
---
Rebuttal 2:
Title: Looking Forward to Further Discussions
Comment: Dear Reviewer,
Thanks again for your insightful comments on our paper.
We have submitted the response to your comments and the global response with a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score.
Thank you
---
Rebuttal Comment 2.1:
Title: Answer
Comment: I am happy with the provided response and I have no further questions in regards to it. I believe this paper lies above the acceptance threshold.
---
Reply to Comment 2.1.1:
Title: Appreciation for Your Support and Positive Review
Comment: Dear Reviewer,
Thank you for your positive feedback and for your confidence in our work. We appreciate your thoughtful review and are pleased that our responses met your expectations.
Thank you again for your support and your constructive comments.
Best regards, | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to all the reviewers for their time, insightful suggestions, and valuable comments. We deeply appreciate the positive recognition from the reviewers regarding our paper’s motivation (hFtj, 16iH, WotV), the elegance and simplicity of our methodology (N2tt, WotV), the rigorous mathematical derivation (N2tt), the extensive experiments demonstrating strong performance across a wide range of datasets and backbones (N2tt, 16iH, WotV), and the thorough implementation details provided for reproducibility (N2tt, hFtj). We address the common concerns raised by the reviewers below:
*1. Presentation of our motivations and contributions.*
We have revised the final paragraph of the Introduction section, Figure 1, and caption of Figure 1 to better articulate our motivations and contributions.
The contributions of this paper are three-fold:
- To the best of our knowledge, we are the first to investigate the issue of inter-entity distribution discrepancies in multi-entity action recognition. Our proposed method, Convex Hull Adaptive Shift for Multi-Entity Actions, effectively addresses this challenge. **Our main idea is to adaptively reposition skeleton sequences to mitigate inter-entity distribution gaps, thereby unbiasing the subsequent backbones and boosting their performance.**
- **Serving as an additional normalization step for backbone models, CHASE consists of a learnable network and an auxiliary objective.** Specifically, the network is formulated by the Implicit Convex Hull Constrained Adaptive Shift, together with the parameterization of a lightweight Coefficient Learning Block, which learns sample-adaptive origin shifts within the skeleton convex hull. Additionally, the Mini-batch Pair-wise Maximum Mean Discrepancy objective is proposed to guide the discrepancy minimization.
- Experiments on NTU Mutual 11, NTU Mutual 26, H2O, Assembly101, Collective Activity Dataset and Volleyball Dataset **consistently verify our proposed method by enhancing the performance of single-entity backbones in the multi-entity action recognition task**.
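For readers unfamiliar with the Maximum Mean Discrepancy objective referenced in the contributions, a minimal sketch of a (biased) RBF-kernel MMD² estimate between two mini-batches of entity features may help. This is purely illustrative: the function names, Gaussian bandwidth, and toy data are our assumptions, not the paper's implementation.

```python
import math

def rbf(x, y, sigma=1.0):
    # Gaussian (RBF) kernel between two feature vectors.
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    two mini-batches of feature vectors X and Y."""
    kxx = sum(rbf(x1, x2, sigma) for x1 in X for x2 in X) / (len(X) ** 2)
    kyy = sum(rbf(y1, y2, sigma) for y1 in Y for y2 in Y) / (len(Y) ** 2)
    kxy = sum(rbf(x, y, sigma) for x in X for y in Y) / (len(X) * len(Y))
    return kxx + kyy - 2.0 * kxy

# Identical feature sets give zero discrepancy; a shifted set gives a
# positive one, which a training objective would then drive down.
A = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0)]
B = [(x + 2.0, y + 2.0) for x, y in A]
```

Minimizing such a statistic between per-entity feature distributions is one standard way to reduce the inter-entity distribution gaps the paper targets.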
*2. Generalization of CHASE.*
- Compared with recent related works, **we have adopted more datasets with a wide range of types of entities and action categories**. As reported in Table 1 of our submission, we have verified CHASE across six diverse benchmarks, including person-to-person interactions, hand-to-object interactions, and group activities. Notably, we have evaluated CHASE on ASB101, which is the largest and most challenging benchmark for this task, featuring over 80,000 samples and 1,380 "verb+noun" categories, with absent object poses. (16iH Q8)
- We have evaluated CHASE using **both 3D joints and estimated 2D joints** as input. (16iH Q5)
- Experimental results demonstrate that **CHASE can be transferred without retraining the module if we align the input shapes of two datasets**. (hFtj Q4)
In addition to addressing the common concerns mentioned above, we have provided detailed responses to each specific question raised by the reviewers. We hope our responses will effectively address the reviewers’ concerns, and we look forward to engaging in comprehensive discussions in the coming days.
Pdf: /pdf/a2ae8eea2ebf44053bd1f42044ddd237e007b526.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Accelerating Augmentation Invariance Pretraining | Accept (poster) | Summary: This submission proposes to accelerate the training of ViTs with two methods: 1) randomly dropping input tokens, and 2) dynamically resizing patches to different dimensions. The second method was published in previous work.
Strengths: N.A.
Weaknesses: 1. The novelty of the submission is limited: in the first proposed method, dropping tokens randomly (masking) is a common trick for performance improvement (as cited by the authors). It is natural to connect this improvement to training acceleration. For the second method, as the authors point out, the method comes from published work.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In Sec. 4.2, the authors mention that "large patches cannot be directly encoded". Why can large patches not be encoded (to what, and by what)? Please add more explanation of the motivation for patch scaling.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sad that the reviewer completely misunderstood the contributions and the significance of the proposed work. We made clear both in the related work and method sections that Token Dropout and Patch Scaling are NOT contributions of our work, by explicitly citing the origins of each technique. The contribution of our work is how to design an optimized acceleration schedule, based on gradient estimation error analysis, so as to optimally define how much compression to use at different phases of training, and how to optimally combine these previously proposed techniques to achieve the required compression rate. We urge the reviewer to read the paper again and carefully reassess it based on our actual contributions. We will be responsive during the author-reviewer discussion phase, in case there are any concerns we can address.
## Why large patches cannot be encoded (to what and by what?)
The first step in a ViT is to embed each $p \times p$ patch into a fixed-size embedding vector (of size $n$). This is done by a linear projection matrix of size $p^2 \times n$ (or, equivalently, implemented as a conv layer of kernel size $p$). Since the projection matrix has a fixed size, it cannot be applied to larger (or smaller) patches.
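The shape mismatch described above can be made concrete with a plain matrix multiplication standing in for the patch-embedding projection. This is an illustrative sketch only (not the authors' code); the function name and dimensions are our own choices.

```python
def embed_patch(patch, W):
    """Project a flattened p*p patch through a p^2 x n matrix W,
    a stand-in for the ViT patch-embedding layer."""
    flat = [v for row in patch for v in row]
    if len(flat) != len(W):
        raise ValueError(f"patch has {len(flat)} values but W expects {len(W)}")
    n = len(W[0])
    return [sum(flat[i] * W[i][j] for i in range(len(flat))) for j in range(n)]

p, n = 16, 768
W = [[0.0] * n for _ in range(p * p)]               # fixed 256 x 768 projection
ok = embed_patch([[1.0] * p for _ in range(p)], W)  # 16x16 patch: works
# A 32x32 patch flattens to 1024 values and cannot go through the same
# 256-row W -- this is why "large patches cannot be directly encoded"
# without adapting the projection (the role of patch scaling).
```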
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Dear Reviewer,
Thank you for your thoughtful review and the time you’ve invested in evaluating our work. We have carefully addressed the points you raised in our rebuttal, and we would greatly appreciate the opportunity to clarify or discuss any remaining questions or concerns you may have.
Thank you once again for your valuable feedback.
---
Reply to Comment 1.1.1:
Title: Final comment
Comment: Please read our final message to all reviewers. We truly appreciate your efforts in reviewing our paper. We hope you consider upgrading the final score to reflect the significant improvements made to the paper during the review cycle, and hopefully a renewed understanding of the intended contributions of our work. | Summary: This paper presents an acceleration framework for Vision Transformers in contrastive learning. It utilizes randomized token dropout and patch scaling to reduce the sequence length and accelerate training. Based on an analysis of the gradient estimation error, this paper proposes an automated procedure to identify an optimal acceleration schedule. Extensive experiments demonstrate that accelerated pretraining achieves comparable performance on visual understanding tasks, while effectively reducing computation costs.
Strengths: Strength:
1. The motivation is clear and reasonable.
2. Extensive experiments demonstrate the approach's effectiveness in reducing the training time while keeping comparable comparisons.
3. This paper is well-organized and clearly-written.
Weaknesses: Weakness:
1. Lower ceiling. It seems like that this framework will lead to a decrease in the optimal performance of pretraining. For instance, in Fig.1(b), Accelerated MoCo's highest performance is lower than MoCo, although much faster. By the way, ImageNet-1K performance is much more important than ImageNet-100, especially for a SSL method.
2. Missing important metric. This paper reports NN and Linear Probing to evaluate the approach, however, full-finetuning top-1 accuracy (on ImageNet-1K) is a critical metric for SSL method. Adding this metric can make this paper more convincing.
3. Changed structure. Since this paper utilizes flexible patch embedding, the structure is not a naive ViT anymore, which may undermine the application of this paper in many scenarios.
4. Limited evaluation. This paper only conduct experiments based on MoCo-v3, more evaluation on other SSL frameworks (e.g, another contrastive learning method or a MIM method) will make it solid.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the comments above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please see the comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable feedback. Below we answer the main concerns, and we will revise the paper accordingly. If there are any remaining concerns we can clarify or provide additional results/analysis, please let us know. We will be responsive during the author-reviewer discussion phase. Given the added experiments and evaluations, as well as the importance of accelerating contrastive pretraining, we hope the reviewer will reconsider the recommendation score.
## Lower ceiling
First, we note that evaluating SSL methods is not a trivial task, and different researchers use/advocate for different evaluation metrics. Having said that, the most commonly used metric for assessing representation learning of SSL methods is Linear Probing accuracy on ImageNet-1k. Under this metric, the acceleration procedure achieves the **same** performance (only 0.1% lower) as the baseline MoCo. We understand that the NN accuracy is slightly worse (1.2% lower). However, this metric is known to be more sensitive to slight changes in the model since no weights are trained to adapt the model output to the downstream task. While we find it informative (the reason why we added it to the paper), linear probing is a more important metric. Also, while we understand that ImageNet-1K results are more important than ImageNet-100, the significantly higher NN accuracy in ImageNet-100 is still worth pointing out.
## Full finetuning
Thanks for the suggestion. We plan to add full finetuning results to the paper. Unfortunately, some of the checkpoints we had trained were lost. We are in the process of retraining and will provide the results, hopefully, during the author-reviewer discussion phase.
## Changed VIT architecture
We believe there was a misunderstanding. Patch scaling is only used for accelerating training. At inference time, the learned VIT simply reverts to the original patches of size 16. No changes were made to the model’s inference computational graph.
## Evaluation beyond MoCo
This is a great suggestion. We have indeed been working on creating optimized acceleration schedules for algorithms beyond MoCo, using the technique proposed in the paper. We were able to achieve a 2.5x speedup in DINO pre-training and a 3.3x speedup in SimCLR pre-training.
| Algorithm | Acceleration | Training Budget (M) | NN (%) | LP (%) | FT (%) |
|-----------|--------------|---------------------|--------------|----------|----------|
| **SimCLR** | ✔ | 922 | 50.70 | 68.43 | 81.55 |
| | ✗ | 3075 | 50.22 | 68.33 | 81.39 |
| **DINO (4 small crops)** | ✔ | 1138 | 66.00 | 77.42 | 82.01 |
| | ✗ | 2846 | 67.36 | 77.48 | 81.87 |
where NN, LP, and FT refer to the accuracies of near neighbor, linear probing, and fine-tuning, respectively. For SimCLR, we simply replaced the backbone in the original implementation (from a ConvNet to ViT-base). As for DINO, the original method performs contrastive learning on both large and small crops. The small crops have a lower computational burden and thus can have a similar effect of speeding up training. To provide a realistic comparison to DINO, both the baseline (unaccelerated) and our accelerated version still use small crops (4 to be exact) in addition to the 2 large crops, with acceleration only applied to large crops. We will add to the paper the results above as well as an analysis of the impact of varying numbers of small crops in DINO.
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Dear Reviewer,
Thank you for your thoughtful review and the time you’ve invested in evaluating our work. We have carefully addressed the points you raised in our rebuttal, and we would greatly appreciate the opportunity to clarify or discuss any remaining questions or concerns you may have.
Thank you once again for your valuable feedback.
---
Rebuttal 2:
Title: Final comment
Comment: Dear reviewer,
As promised, we evaluated the accelerated models trained with different training budgets using the full finetuning protocol and compared them to our baseline MoCo model. (We followed the MoCo-V3 paper and code in our finetuning evaluations). As can be seen in the table below, under this evaluation protocol, the speedup is even more pronounced (compared to what we showed in the paper using Linear Probe evaluations). Specifically, the model trained with about 10% of the budget already achieved a result comparable to unaccelerated training (only 0.24% worse).
Overall, the main concerns were 1) the lower ceiling in some evaluations, 2) the absence of finetuning results, 3) the changed backbone architecture, and 4) the lack of experiments beyond MoCo. Additional experiments for 2 and 4 were conducted and will be added to the paper. We believe that concern #3 was a misunderstanding and concern #1 should now be less critical, given that we observe similar performances on both linear probing and finetuning (of all conducted experiments, only nearest neighbor evaluations on IN1k showed slightly lower performance ceilings). We thank the reviewer for helping us improve our paper, and ask the reviewer to reassess the final score to account for these improvements.
| Algorithm | Acceleration | Training Budget (M) | FT (%) |
|:----:|:----:|:-----:|:----:|
| **MoCo** | ✔| 308| 80.56|
|**MoCo** | ✔ | 617 | 81.61 |
| **MoCo**| ✔ | 1080 | 81.81|
| **MoCo**| ✔ | 1542 | 81.92 |
|**MoCo** | ✗ | 6150 | 81.85 * |
\* As mentioned in the paper, our non-accelerated model is slightly worse than the officially released MoCo-v3 model. While we used the same overall training budget, we could only train with relatively smaller batch sizes of 1024 (as opposed to the original 4096), which to the best of our knowledge is the cause of the gap. However, keep in mind that, the goal of the paper is to propose and validate a training acceleration technique. The results above show that, given the same training loss (which in the case of MoCo depends on the batch size), our acceleration method can significantly speed up convergence. We expect similar convergence speedups when training with higher batch sizes. | Summary: This work focuses on speeding up contrastive learning with vision transformers. Two methods, specifically tailored to ViTs, are investigated for making pretraining more efficient: randomised token dropout and flexible patch scaling. Additionally, the authors analyse the gradient estimation errors from these methods and create an automated strategy for optimal acceleration during pretraining. The resulting approach can achieve similar performances for a fraction of the budget, specifically 1/5 for ImageNet-100 and 1/3 for ImageNet-1k.
Strengths: **Originality**
TknDrop is clearly inspired by common usage in MIM and patch scaling is taken from FlexiViT. These are therefore not original ideas but have not been explored for speeding up contrastive ViTs. The more original aspect of the paper, however, is the automatic scheduling of these techniques through gradient error monitoring.
**Quality**
The construction of the final dynamic acceleration method is methodical and thorough. Extensive ablations are performed to find the optimal combination that achieves the most effective acceleration. The experimental section overall is of very high quality.
**Clarity**
The paper is very clearly written throughout, and the structure is very natural and easy to follow. The plentiful figures and tables are effectively communicating the right information, though Figure 7 could have a larger font. It is however unclear what Lq=50 means in terms of the token dropout ratios (0, 0.25, 0.5, 0.75, 0.9). Does it mean that 50 tokens are kept out of the total 197 (meaning roughly 0.75 for the dropout ratio)?
**Significance**
The simplicity and automated nature of the method makes it seem easy for practitioners and researchers to use themselves. This can indeed facilitate a wider adoption of self-supervised pretraining for those with modest compute resources. However, while the authors point out that their narrow focus on MoCov3 + contrastive learning + ViT allows for a deeper study, the paper suffers a bit by not showing any generalisation to other types of training, like supervised pre-training on ImageNet.
Weaknesses: My main concern is that the scope of the paper is rather narrow. It does one thing and it does it well, but this limits its potential impact. The proposed method is not limited to only contrastive pretraining and a single experiment that shows how it also applies to supervised pretraining on ImageNet would help show how generally applicable it is.
It’s not quite clear how the values 1/3 and 1/5 for the budget are obtained for the caption of Figure 1. Section 6.4 claims a 4x speedup on ImageNet. These claims can be made more consistent throughout.
Technical Quality: 3
Clarity: 4
Questions for Authors: Are the speedup techniques explored in this paper orthogonal/complementary to e.g. resolution scaling or curriculum learning that have been explored in other works?
Can longer training with token dropout and patch scaling yield better results than the baseline, I.e. using the same budget of 520M on ImagetNet-100?
Are the overheads for automatically scheduling the acceleration accounted for in the budget, or are the optimal decision points for the scheduler computed offline and fixed before training?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: There is no section that discusses the limitations of the proposed method explicitly. I would like to see some of the questions I’ve asked answered in such a section, or as part of the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable feedback. We are glad to see the originality of the proposed method and the significance and quality of our empirical results appreciated. We will revise the paper to add experiments and clarify unclear points, as described below. If there are any remaining concerns we can clarify, or provide additional results/analysis, please let us know. We will be responsive during the author-reviewer discussion phase.
## Generalization to other pretraining frameworks
While we plan to investigate other pretraining frameworks such as supervised learning, vision-language pretraining, and even object detection/segmentation in a future journal extension of the paper, we focused on contrastive pre-training since contrastive learning is famously slow to converge, often requiring up to 1000 epochs to obtain optimal performance. Having said that, we have indeed been working on creating optimized acceleration schedules for algorithms beyond MoCo, using the technique proposed in the paper. We were able to achieve a 2.5x speedup in DINO pre-training and a 3.3x speedup in SimCLR pre-training.
| Algorithm | Acceleration | Training Budget (M) | NN (%) | LP (%) | FT (%) |
|-----------|--------------|---------------------|--------------|----------|----------|
| **SimCLR** | ✔ | 922 | 50.70 | 68.43 | 81.55 |
| | ✗ | 3075 | 50.22 | 68.33 | 81.39 |
| **DINO (4 small crops)** | ✔ | 1138 | 66.00 | 77.42 | 82.01 |
| | ✗ | 2846 | 67.36 | 77.48 | 81.87 |
where NN, LP, and FT refer to the accuracies of near neighbor, linear probing, and fine-tuning, respectively. For SimCLR, we simply replaced the backbone in the original implementation (from a ConvNet to ViT-base). As for DINO, the original method performs contrastive learning on both large and small crops. The small crops have a lower computational burden and thus can have a similar effect of speeding up training. To provide a realistic comparison to DINO, both the baseline (unaccelerated) and our accelerated version still use small crops (4 to be exact) in addition to the 2 large crops, with acceleration only applied to large crops. We will add to the paper the results above as well as an analysis of the impact of varying numbers of small crops in DINO.
## Relation to other speedup techniques
The main difference between the prior techniques mentioned in the paper and our work is that prior works are not tailored to VITs. Curriculum learning strategies can be applied to any model and used in conjunction with our VIT-specific techniques for potentially larger speedups.
Resolution scaling explores a similar idea to dynamic patch scaling. The main difference is that resolution scaling simply resizes the input images, while patch scaling adjusts the patch projection layer instead, in a more principled fashion (as extensively discussed in FlexiViT). Prior work has explored resolution scaling mostly with CNNs, but it could also be extended to ViTs. Given the similarities, combining resolution and patch scaling should not lead to major improvements.
## Longer training schedule
In the algorithms that we tested, longer training schedules did not lead to improved performance. This is likely because we have chosen algorithms that have been fully optimized until convergence. As can be seen in Fig 7 of the main paper, when acceleration is used for too long, the model can overfit (NN accuracy of the model trained with a constant 75% dropout ratio drops in the 2nd half of training). We found that using long schedules will often lead to the overuse of acceleration and consequently performance drops. These drops can be recovered later on, after the schedule prescribes less acceleration, but still, equal or lower performance was observed at the end of training.
## How is the scheduler computed?
The schedule is computed offline and fixed before training. The same optimized schedule is then used across many runs in the paper (eg, with different total budgets). Since the schedule is fixed, it has no additional overhead during training.
## Discussion of limitations
Thank you for pointing this out. We will add a discussion of limitations/future work to the conclusion. In particular, we will highlight the potential of the method being used for other pre-training frameworks, as well as, the potential of the proposed method to be deployed in an online fashion for truly dynamic acceleration schedules.
## What Lq=50 means?
It means that a total sequence length of 50 tokens (out of the initial 196) is fed to the query encoder.
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Dear Reviewer,
Thank you for your thoughtful review and the time you’ve invested in evaluating our work. We have carefully addressed the points you raised in our rebuttal, and we would greatly appreciate the opportunity to clarify or discuss any remaining questions or concerns you may have.
Thank you once again for your valuable feedback. | Summary: The paper presents a framework to speed up the pre-training of Vision Transformers (ViTs) in a self-supervised contrastive learning setup. The proposed method incorporates randomized token dropout and flexible patch scaling. The authors leverage this framework to analyze estimated gradient errors and its downstream performance. Additionally, they propose to determine an optimal dynamic acceleration schedule during training. Experimental findings demonstrate improvements in the convergence rate of the MoCo-v3 model across IN-100 and IN-1k datasets.
Strengths: - The proposed acceleration framework brings noticeable improvements in the pre-training convergence of MoCo-v3 on IN-100 and IN-1k.
- The framework incorporates various sequence compression strategies. The authors investigate how these strategies affect gradient estimation errors and analyze their impact on downstream performance.
- The acceleration framework includes a dynamic scheduler that adapts during training. It is validated across different training budgets and supported by ablation studies that highlight the importance of the individual contribution of token dropout and patch-scaling.
Weaknesses: - The proposed framework consists of two components: (1) randomized token dropout and (2) flexible patch scaling. Both ideas exist and have generally been studied for efficient pre-training of ViTs. For instance, the idea of dropping tokens in ViTs has been explored in various forms since their introduction. Recent research has focused on more sophisticated and targeted ways of applying token dropout in ViTs [1, 2, 3]. As a result, the main contributions of this work may be a bit limited.
- In section 4.3 - 'Since there are many more linear operations than quadratic ones, the time complexity of linear operations dominates. Thus, for the sake of simplicity, we consider the time complexity of the ViT architecture to be linear in the sequence length'. This statement seems oversimplified. While it's true that there are more distinct linear operations, this doesn't automatically mean they dominate the time complexity. I am not convinced that having more linear operations would negate the impact of the quadratic operation, even with a sequence length of ~200. This assertion requires further clarification and possibly revision.
- The concepts of randomized token dropout and patch scaling could potentially benefit ViTs in other self-supervised learning (SSL) approaches such as distillation-based SSL (DINO [4], iBOT [5]). I think there will be more contribution if the authors explored pre-training of ViTs in a broader SSL setting than being restricted to just MoCo-v3.
- I am skeptical about labeling the described approach as truly dynamic acceleration. Although it employs cost-adjusted MSE to compare strategies efficiently, the acceleration schedule is predetermined based on the analysis of intermediate checkpoints from a pre-trained model (as mentioned in section 5.1 under Dynamic Acceleration). A genuinely dynamic acceleration approach would involve real-time adjustments during training. Therefore, the method could be better described as an optimized static schedule that varies across different training stages rather than a fully adaptive, dynamic approach.
[1] Marin, Dmitrii, et al. "Token pooling in vision transformers." arXiv preprint arXiv:2110.03860 (2021).
[2] Ryoo, Michael S., et al. "Tokenlearner: What can 8 learned tokens do for images and videos?." arXiv preprint arXiv:2106.11297 (2021).
[3] Wang, Yulin, et al. "Not all images are worth 16x16 words: Dynamic transformers for efficient image recognition." Advances in neural information processing systems 34 (2021): 11960-11973.
[4] Caron, Mathilde, et al. "Emerging properties in self-supervised vision transformers." Proceedings of the IEEE/CVF international conference on computer vision. 2021.
[5] Zhou, Jinghao, et al. "ibot: Image bert pre-training with online tokenizer." arXiv preprint arXiv:2111.07832 (2021).
Technical Quality: 2
Clarity: 3
Questions for Authors: - Line 210: 'The variance is a function of their computational cost.' Could you provide references or further clarification on how variance is a direct function of computational cost?
- Line 268 - 'The only modification was the use of a non-symmetric loss.' Is there a specific reason for this, considering that MoCo-v3 uses symmetric loss?
- Line 271 - 'Non-symmetric version produces more diverse batches.' Could you elaborate on what diverse batches mean and how beneficial they are exactly?
- See weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable feedback. We are glad the reviewer found the noticeable improvements in ViT contrastive pre-training convergence valuable. This is indeed the flagship result of the paper, which has not been explored in any other prior work. Given the importance of the topic (contrastive pretraining is a foundational pretraining technique that consumes considerable amounts of computational resources) and the positive results demonstrated in this paper, we hope the reviewer will reconsider their score for this reason alone. We will do our best to address the raised concerns in the main paper, as we outline below. If there are any remaining concerns we can clarify, or provide additional results/analysis, please let us know. We will be responsive during the author-reviewer discussion phase.
## Limited contribution
While it is true that both token dropout and patch scaling have been used in the literature, these techniques have not been studied for efficient gradient approximation (the main topic of our work). One exception is FLIP [22] where token dropout was used for accelerating vision language pre-training. The reviewer mentions prior work on token pruning and token aggregation techniques [1,2,3], however, these works study inference time acceleration, not acceleration of gradient computation. In our case, while we accelerate training convergence by reducing the compute requirements of each training iteration, we do not change the model architecture or its inference graph in any way. The final pre-trained model is still the original VIT.
More importantly, the paper’s main contribution is the methodology used to obtain the optimized acceleration schedule. The methodology, based on gradient estimation error analysis, is novel and, as shown in the paper, effective. Furthermore, while we focus on contrastive pretraining, the proposed methodology can be generalized to other training regimes. Thus, the publication of this work has the potential for downstream impact beyond contrastive pre-training of ViT models.
## Approximate linear time complexity of low-resolution VITs
This statement has been empirically verified. Below we show the time spent on a forward plus backward step for a ViT using varying sequence lengths. Timings were obtained using an RTX A4500 with a batch size of 16. To ensure that pure GPU computation is measured (without any CPU bottlenecks from data loading), randomly generated sequences were used at each iteration, and each timing was averaged over 20 independent measurements. As can be seen, the linear approximation is remarkably accurate up to a sequence length of 300. In our observations, the quadratic cost of attention mechanisms only becomes a factor at sequence lengths of 750 and above.
| **#Tokens**| 25| 50 | 100 | 150 | 200 | 300 |
|-|-|-|-|-|-|-|
| **Time Spent** | 0.030 | 0.046 | 0.086 | 0.132 | 0.171 | 0.267 |
| **Linear Trendline ($R^2=0.998$)** | 0.024 | 0.047 | 0.089 | 0.133 | 0.176 | 0.263 |
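The linear trendline in the table can be reproduced with an ordinary least-squares fit over the measured timings. The sketch below uses the numbers from the table above; absolute timings of course depend on hardware.

```python
def linear_fit(xs, ys):
    # Ordinary least-squares slope, intercept, and R^2.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2

tokens = [25, 50, 100, 150, 200, 300]
seconds = [0.030, 0.046, 0.086, 0.132, 0.171, 0.267]
slope, intercept, r2 = linear_fit(tokens, seconds)
# r2 comes out around 0.998, matching the trendline row in the table:
# at these sequence lengths, per-step time is essentially linear in #tokens.
```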
## Generalization to beyond MoCo
This is a great suggestion. We have indeed been working on creating optimized acceleration schedules for algorithms beyond MoCo, using the technique proposed in the paper. We were able to achieve a 2.5x speedup in DINO pre-training and a 3.3x speedup in SimCLR pre-training.
| Algorithm | Acceleration | Training Budget (M) | NN (%) | LP (%) | FT (%) |
|---|----|-------|-----|-----|------|
| **SimCLR** | ✔ | 922 | 50.70 | 68.43 | 81.55 |
| | ✗ | 3075 | 50.22 | 68.33 | 81.39 |
| **DINO (4 small crops)** | ✔ | 1138 | 66.00 | 77.42 | 82.01 |
| | ✗ | 2846 | 67.36 | 77.48 | 81.87 |
where NN, LP, and FT refer to the accuracies of near neighbor, linear probing, and fine-tuning, respectively. For SimCLR, we simply replaced the backbone in the original implementation (from a ConvNet to ViT-base). As for DINO, the original method performs contrastive learning on both large and small crops. The small crops have a lower computational burden and thus can have a similar effect of speeding up training. To provide a realistic comparison to DINO, both the baseline (unaccelerated) and our accelerated version still use small crops (4 to be exact) in addition to the 2 large crops, with acceleration only applied to large crops. We will add to the paper the results above as well as an analysis of the impact of varying numbers of small crops in DINO.
## Naming proposed approach as dynamic vs optimized acceleration
We agree with the reviewer. “Optimized acceleration schedule” is a more appropriate description of the proposed approach. We will revise the paper accordingly. Thanks for pointing this out.
## Variance as a function of computational cost
We clarify what we meant in the next sentence (lines 210-212). In short, assume we can decide how much computational budget to provide to approximate one gradient. A simple strategy to improve this approximation is simply to use bigger batch sizes (ie, averaging the gradient over more samples). While averaging does not change estimation bias, it reduces the estimation variance. That’s why we say that variance is a function of the allocated budget. We will clarify this in the paper.
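The averaging argument above can be checked with a minimal simulation (our own illustration, not from the paper): treating each per-sample gradient as a noisy unbiased estimate of the true value, averaging over a batch of B samples shrinks the estimator's variance roughly as 1/B, at B times the compute cost.

```python
import random
import statistics

random.seed(0)

def batch_grad(batch_size):
    # One noisy scalar "gradient" estimate: the mean of batch_size
    # samples whose true value is 1.0, with unit-variance Gaussian noise.
    return sum(random.gauss(1.0, 1.0) for _ in range(batch_size)) / batch_size

trials = 5000
var_b1 = statistics.variance(batch_grad(1) for _ in range(trials))
var_b16 = statistics.variance(batch_grad(16) for _ in range(trials))
# var_b16 is roughly var_b1 / 16: same (zero) bias, much lower variance,
# which is why estimation variance is a function of the allocated budget.
```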
## Non-symmetric loss
Given constant GPU resources, with a non-symmetric loss we can use twice the batch size. This has two advantages: 1) the number of negatives is doubled (an important parameter in contrastive learning); and 2) sample-wise gradients are obtained from truly independent samples (i.e., all gradients are obtained from different images). The second point is what we are referring to when saying that the non-symmetric versions use more diverse batches: i.e., instead of using the same image to compute two losses and their respective gradients (which can be correlated), the non-symmetric version simply computes each gradient from completely different images.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the responses. Most things have been addressed.
I'm not fully convinced by the claim that linear operations dominate the time complexity; when purely speaking asymptotically, we still refer to ViTs as having a quadratic complexity in sequence length. Perhaps consider changing the section header to reflect a focus on low-resolution settings. Overall, I would have preferred a broader focus on general contrastive settings, as promised in the abstract, rather than just MoCo. Also, DINO is not a contrastive setup; it's a distillation-based method. It's great that you're getting good results with it, but I'm unsure if it fits the contrastive theme of the paper.
---
Reply to Comment 1.1.1:
Title: Discussion
Comment: Thank you for engaging during the discussion period.
**Linear time complexity**
There are a couple of things we would like to add. First, we completely agree that transformers have quadratic time complexity wrt the sequence length. This quadratic complexity is a bottleneck (and thus cannot be ignored) for NLP applications, where sequence length is often in the thousands, and for some vision applications like segmentation or image generation where images are processed at high resolution (640x640 and even higher).
However, contrastive learning (and most other self-supervised learning and recognition applications) has been traditionally studied at the baseline resolution of 224x224 (which with a patch size of 16x16 yields a sequence length of 196). We have not modified the resolution of these algorithms in our paper. So, while we understand the reviewer's point, we used the linear approximation simply because it simplifies how time is accounted for, and because at the standard 224x224 resolution (used in nearly all papers in contrastive learning), the approximation is very accurate. Also, note that the proposed approach would be even more effective if the quadratic operations were the dominant ones, as the impact of reducing the sequence length would be even higher. We will seek to further clarify this in the paper.
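As a back-of-envelope illustration of this point (our own sketch using standard ViT-Base constants, not numbers from the paper), the fraction of per-layer FLOPs spent in the quadratic attention terms is small at sequence length 196 but grows with $n$:

```python
def quadratic_fraction(n, d=768):
    """Rough per-layer FLOP split for a standard transformer block:
    linear terms    ~ 12*n*d^2 (QKV + output projections + 4x-expansion MLP),
    quadratic terms ~  2*n^2*d (QK^T scores + attention-weighted values)."""
    linear = 12 * n * d * d
    quadratic = 2 * n * n * d
    return quadratic / (linear + quadratic)

# At 224x224 with 16x16 patches (n = 196), the quadratic terms are only a
# few percent of the compute; at NLP-scale sequence lengths they dominate.
for n in [196, 1024, 4096]:
    print(n, round(quadratic_fraction(n), 3))
```

Under this accounting, the linear approximation of cost is accurate at the standard recognition resolution, and a larger quadratic share would only make sequence-length reduction more effective.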
**DINO:**
While contrastive and distillation-based methods are not exactly the same, they both learn through view invariance (seeking to represent different views by the same embedding). To include DINO and SimCLR results, we would slightly modify the abstract and introduction to be slightly more general (saying "We focus our evaluation efforts on view-invariance pretraining methods including contrastive learning algorithms like MoCo and SimCLR and distillation-based methods like DINO."). Please let us know if you think this would still not be appropriate. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Federated Learning under Periodic Client Participation and Heterogeneous Data: A New Communication-Efficient Algorithm and Analysis | Accept (poster) | Summary: This paper introduces Amplified SCAFFOLD, which is an optimization algorithm for federated learning under periodic client participation. The authors prove that it achieves reduced communication cost, linear speedup, and resilience to data heterogeneity. Numerical experiments are provided to evaluate the performance of Amplified SCAFFOLD.
Strengths: For federated learning under periodic client participation, Amplified SCAFFOLD is proposed and proven to exhibit reduced communication, linear speedup, and resilience to data heterogeneity. Experimental results show that Amplified SCAFFOLD converges faster than baseline algorithms and is robust to changes in data heterogeneity and the number of participating clients.
Weaknesses: 1. It would be better to provide a detailed description of the algorithm rather than just an overview.
2. Except for the synthetic data, the experiments are based only on two image datasets (Fashion-MNIST and CIFAR-10) with classification tasks. Adding more experiments would better demonstrate the performance of Amplified SCAFFOLD.
Minor:
1. Line 60-61: Delete "non-uniform average of".
2. Line 99: Assumption 1 (a) is not precise. It should be for all $y$, not for all $x,y$.
3. Line 110: $P$ first appears without explanation.
4. Line 145: Delete "is".
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. For cycle participation, why are the subsets equally sized? Can they have different sizes? In addition, can the clients sampled with replacement?
2. In the experiments, $\bar{K}$ is chosen to be 5. What about the other choices? What is its influence?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As the authors stated, this paper only considers periodic client participation. However, this may not always be true in practice. Clients may arbitrarily join and leave the training, and they may perform different numbers of local steps in each round.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments on our paper. Below we have responded to your comments and questions.
Weaknesses:
1. **Detailed description of the algorithm.** Thank you for the feedback on our presentation. Section 4.1 is meant to describe the key algorithmic components of Amplified Scaffold, while the concrete definition of the algorithm is left to pseudocode in Algorithm 1. To improve the clarity of the updated version, we can include a detailed description of the algorithm, such as:
Amplified SCAFFOLD has a high-level structure similar to that of many FL optimization algorithms: in each round, participating clients individually perform local steps without communication (line 8), then aggregate their updates to the global model (line 12). However, the local steps (line 8) include the control variates $G_{r_0}^i$ and $G_{r_0}$, which modify each local step so that it approximates a gradient step on the global objective. The control variate $G_{r_0}^i$ is the average of stochastic gradients seen by client $i$, but the average is not taken over just the last round: instead, the control variates are computed over windows of $P$ rounds (lines 17-21), in order to ensure that all clients are equally represented (in expectation). The other deviation from the conventional structure of FL algorithms is in the amplified updates (line 15). Over each window of $P$ rounds, the updates from each round are accumulated in the variable $u$. At the end of each window, all clients will have participated with equal frequency (in expectation), so $u$ should contain information from all clients. We then use $u$ to update the global model with a potentially large multiplier $\gamma$ (line 15), leveraging the fact that $u$ contains global information. Together, these two components allow Amplified SCAFFOLD to simultaneously handle non-i.i.d. client participation and heterogeneous objectives.
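A schematic sketch of these mechanics (our own simplified NumPy rendering with made-up toy parameters; it is not the paper's Algorithm 1, whose line numbers, participation pattern, and step-size prescriptions differ):

```python
import numpy as np

def amplified_scaffold_sketch(grad, clients, x0, P=4, S=2, K=5,
                              eta=0.02, gamma=2.0, rounds=40, seed=0):
    """Schematic only: local steps corrected by control variates G_i / G,
    round updates accumulated over windows of P rounds, and the accumulated
    update applied with amplification factor gamma at each window's end."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    G_i = {i: np.zeros_like(x) for i in clients}        # per-client control variates
    G = np.zeros_like(x)                                # their average
    u = np.zeros_like(x)                                # accumulated window update
    seen = {i: [np.zeros_like(x), 0] for i in clients}  # grads seen this window
    for r in range(rounds):
        sampled = rng.choice(clients, size=S, replace=False)
        delta = np.zeros_like(x)
        for i in sampled:
            y = x.copy()
            for _ in range(K):                          # corrected local steps
                g = grad(i, y)
                seen[i][0] += g
                seen[i][1] += 1
                y -= eta * (g - G_i[i] + G)
            delta += (y - x) / S
        u += delta
        if (r + 1) % P == 0:                            # window end: amplified update
            x += gamma * u
            u = np.zeros_like(x)
            for i in clients:                           # refresh control variates
                if seen[i][1] > 0:
                    G_i[i] = seen[i][0] / seen[i][1]
                seen[i] = [np.zeros_like(x), 0]
            G = sum(G_i.values()) / len(clients)
    return x
```

On a toy heterogeneous problem with quadratic client objectives $f_i(x) = \lVert x - c_i \rVert^2 / 2$, the corrected updates drive $x$ toward the global minimizer (the mean of the $c_i$) even though each client only ever sees its own $c_i$.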
Please let us know if this explanation will improve the clarity of our presentation.
2. **Additional experiments.** In response to your comment about additional experiments, and comments from other reviewers, we have made two additions to the experiments of our submission. First, we added four new baselines (FedAdam, FedYogi, FedAvg-M, and Amplified FedAvg + FedProx) to the evaluation of FashionMNIST and CIFAR-10. Second, we evaluated each algorithm (including these new baselines) under a new non-i.i.d. participation pattern, which models user device availability that is both periodic and unreliable. Both of these experiments are described in detail in the general response, and the loss curves are shown in Figures 1-3 of the 1-page PDF. Here, we highlight the main findings.
The new baselines FedAdam and FedYogi are competitive for both FashionMNIST and CIFAR-10, but **Amplified SCAFFOLD still maintains the best performance in terms of training loss and testing accuracy across all baselines**. This is consistent with the fact that FedAdam and FedYogi were designed for i.i.d. client sampling, whereas Amplified SCAFFOLD enjoys convergence guarantees under periodic participation.
We also evaluate all algorithms under a non-i.i.d. participation pattern which we refer to as Stochastic Client Availability (SCA). On CIFAR-10 with SCA participation, Amplified SCAFFOLD again reaches the best training loss and testing accuracy, showing that **Amplified SCAFFOLD performs well under multiple non-i.i.d. participation patterns**.
Questions:
1. **Variations on group subsampling.** In your review, you asked "For cycle participation, why are the subsets equally sized? Can they have different sizes?" In our paper and in related works [4, 34], the subsets in cyclic participation are usually equally sized for simplicity. However, this is not a requirement. In fact, for our additional experiment (see general response \#2), the number of available clients is randomly sampled and changes throughout training. This participation pattern is more flexible than cyclic participation, and may better model the unpredictability of real-life client devices.
Also, you asked "In addition, can the clients sampled with replacement?" If we understand your question correctly, you are asking whether the same client can be sampled between two consecutive rounds. If so, the answer is yes. At every round, $S$ clients are sampled, and these clients are not removed from the set of available clients for the next round. In other words, the sampling mechanism does not cycle through all clients before returning to previous clients.
2. **Effect of $\bar{K}$.** In your review, you asked "In the experiments, $\bar{K}$ is chosen to be 5. What about the other choices? What is its influence?" We chose $\bar{K} = 5$ following [34], but this parameter can vary. One way to understand the role of this parameter is to notice that $\bar{K} = 1$ corresponds to full client availability, while large values of $\bar{K}$ mean that there are many small groups of clients which are usually not available. Therefore, larger values of $\bar{K}$ mean that participation is in some sense "further from i.i.d.". We expect that larger $\bar{K}$ means that the optimization problem is more difficult, and will require more communication rounds, which aligns with the communication cost of Amplified SCAFFOLD in the last row of Table 1.
In our experiments, data heterogeneity is created by allocating different data distributions to different clients. With $N=250$ clients and $C=10$ classes, the first 25 clients have a majority of their data from class 1, the next 25 clients have a majority of class 2, etc. The choice $\bar{K} = 5$ yields 50 clients per group, so that each group contains clients with a majority label from two different classes. We believe that this is a good intermediate value for $\bar{K}$: it is not so small that we are too close to the i.i.d. regime ($\bar{K} = 1$), but it is not so large that one class out of ten dominates training in each round.
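As an illustration of the setup above (our own sketch of the grouping and phase logic, not exact pseudocode from the paper), cyclic participation with $\bar{K}$ groups and period $P$ can be written as:

```python
import random

def available_group(r, P=20, K_bar=5):
    # Within each P-round cycle, each of the K_bar groups is available
    # for P // K_bar consecutive rounds, cycling through all groups.
    return (r % P) // (P // K_bar)

def sample_clients(r, N=250, K_bar=5, P=20, S=10, rng=random):
    # Clients are split into K_bar equally sized groups; each round we
    # sample S clients from the currently available group. Sampled clients
    # are not removed, so the same client can reappear in consecutive
    # rounds of the same phase.
    g = available_group(r, P, K_bar)
    size = N // K_bar
    return rng.sample(range(g * size, (g + 1) * size), S)

# With N=250 and K_bar=5: rounds 0-3 draw from clients 0-49,
# rounds 4-7 from clients 50-99, and so on, repeating every P=20 rounds.
```

Under the data allocation described above (the first 50 clients dominated by classes 1-2, the next 50 by classes 3-4, etc.), each available group covers a majority label from two classes per phase.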
---
Rebuttal Comment 1.1:
Comment: Thank you for your efforts in reviewing our paper. In our rebuttal, we responded to the questions and concerns in your review. In particular, we included a detailed description of the algorithm which we can use to improve presentation, additional experiments including an evaluation under a new client participation pattern, and a discussion of the parameters determining participation. Please let us know if we have addressed your concerns, or if you have any more questions. We are happy to continue discussion.
Best,
Authors
---
Rebuttal 2:
Comment: There seems to be some confusion around the notation, so let us clarify. In our paper, $\bar{K}$ represents the number of groups for a cyclic participation pattern, and we do not use the variable $K$ anywhere in the paper (other than when using the notation of related works in Appendix E, where $K$ represents the number of local steps). In your previous comment you asked about $K$, and we assumed that you meant $\bar{K}$, and we used the notation $K$ to remain consistent with your question. We apologize for the inconsistency. With this in mind, we can restate the answer to your previous question as: the number of groups $\bar{K}$ does not affect the heterogeneity among client data, it only affects how clients participate in training. Again, we follow the experimental setting of [(Wang & Ji, 2022)](https://arxiv.org/abs/2205.13648), which uses a single choice for $\bar{K}$.
The communication complexity of Amplified SCAFFOLD depends on $\bar{K}$, and the same dependence is present in the complexity of Amplified FedAvg, while Amplified FedAvg also contains a much larger term of order $\epsilon^{-4}$. Therefore changes to $\bar{K}$ should not affect the relative performance of Amplified SCAFFOLD compared to Amplified FedAvg. Also, the choice of a single $\bar{K}$ in both our experiments and those of [(Wang & Ji, 2022)](https://arxiv.org/abs/2205.13648) is consistent with practical scenarios where cyclic participation might arise: when device availability corresponds to geographic location, the number of groups $\bar{K}$ should remain fixed even as the number of users $N$ increases, because there are a limited number of time-zones around the world.
---
Rebuttal Comment 2.1:
Comment: Thank the authors for the response. I acknowledge that $\bar{K}$ should be fixed in one setting. However, evaluating different values of $\bar{K}$ does not imply that $\bar{K}$ is dynamic during a single training session. Instead, it allows for a more comprehensive comparison of the performance of Amplified SCAFFOLD across different settings. Since the experiments are simulations, it is both possible and reasonable to choose a value for $\bar{K}$ other than 5.
I also acknowledge that changes to $\bar{K}$ should not affect the relative performance of Amplified SCAFFOLD compared to Amplified FedAvg **theoretically**. However, there may be inconsistencies between empirical and theoretical performance. Therefore, I believe it is still necessary to evaluate different values of $\bar{K}$.
---
Rebuttal 3:
Comment: We agree that the theory may not perfectly predict the practical performance, so it is important to evaluate different values of $\bar{K}$. Below we include the results of an additional experiment that evaluates Amplified SCAFFOLD and the three major baselines considered in the paper: FedAvg, SCAFFOLD, and Amplified FedAvg. We evaluated each algorithm for the Fashion-MNIST dataset, keeping all of the same settings as stated in the main paper, other than setting $P = 24$ and varying $\bar{K}$ over $\bar{K} = 2, 4, 6, 8$. We only change $P$ from $20$ to $24$ so that $P / \bar{K}$ is an integer, for the sake of simplicity. The training loss and testing accuracy reached at the end of training are shown in the tables below:
| Train Loss | $\bar{K}=2$ | $\bar{K}=4$ | $\bar{K}=6$ | $\bar{K}=8$ |
|---|---|---|---|---|
| FedAvg | 0.00216 | 0.00222 | 0.00229 | 0.00236 |
| SCAFFOLD | 0.00156 | 0.00154 | 0.00161 | 0.00160 |
| Amplified FedAvg | 0.00206 | 0.00208 | 0.00210 | 0.00213 |
| Amplified SCAFFOLD | **0.00150** | **0.00147** | **0.00150** | **0.00148** |
| Test Accuracy | $\bar{K}=2$ | $\bar{K}=4$ | $\bar{K}=6$ | $\bar{K}=8$ |
|---|---|---|---|---|
| FedAvg | 79.596% | 78.527% | 77.69% | 76.999% |
| SCAFFOLD | 83.064% | 83.357% | 82.695% | 83.072% |
| Amplified FedAvg | 80.902% | 80.507% | 80.243% | 79.891% |
| Amplified SCAFFOLD | **83.643%** | **83.964%** | **83.715%** | **83.957%** |
We draw several conclusions from these results:
1. Amplified SCAFFOLD achieves the best training loss and testing accuracy across all algorithms, for every choice of $\bar{K}$.
2. FedAvg and Amplified FedAvg get worse as $\bar{K}$ increases, while SCAFFOLD and Amplified SCAFFOLD get better as $\bar{K}$ increases. It makes intuitive sense for an algorithm to degrade as $\bar{K}$ increases, since a larger $\bar{K}$ means that the participation is in some sense "further" from i.i.d. participation. Still, Amplified SCAFFOLD (and SCAFFOLD) are able to maintain performance even as $\bar{K}$ increases.
3. Amplified SCAFFOLD is the only algorithm whose final training loss with $\bar{K} = 8$ is better than the final training loss with $\bar{K} = 2$.
These experimental results show that Amplified SCAFFOLD is robust to changes in the number of groups $\bar{K}$ under cyclic participation. While the worst-case communication complexity of Amplified SCAFFOLD (listed in Table 1) actually increases with $\bar{K}$, our experiments demonstrate that in practice, Amplified SCAFFOLD can maintain performance as $\bar{K}$ increases. We hope that this addresses your concern that different values of $\bar{K}$ should be evaluated: please let us know if you are satisfied.
Best,
Authors
---
Rebuttal Comment 3.1:
Comment: Thank the authors for conducting the additional experiments. It's great to see that Amplified SCAFFOLD outperforms all the baselines across different values of $\bar{K}$. I am willing to increase my score.
The conclusion that "SCAFFOLD and Amplified SCAFFOLD get better as $\bar{K}$ increases" might not be accurate. I am curious about how Amplified SCAFFOLD and SCAFFOLD are able to maintain performance as $\bar{K}$ increases. Could the authors provide some insights or reasons behind this observation?
---
Rebuttal 4:
Comment: It is difficult to explain this observation from a theoretical perspective: SCAFFOLD does not have any theoretical guarantees under cyclic participation, while the communication complexity of Amplified SCAFFOLD indeed increases with $\bar{K}$. One explanation is that the complexity in Table 1 represents the worst-case over all objectives satisfying the assumptions, and the performance for this particular task could be much better than the worst case.
Empirically, it appears that the information contained in the control variates of SCAFFOLD and Amplified SCAFFOLD is sufficient to avoid the effect of longer participation cycles. Essentially, the modified update direction using the control variate appears to be a good estimator for the global gradient, even when the control variates are not updated for a long time due to larger values of $\bar{K}$. However, it is difficult to say for sure without further investigation.
---
Rebuttal Comment 4.1:
Comment: Thank the authors for the explanation. It might be interesting to further investigate the underlying reasons. | Summary: The paper proposes a new algorithm named Amplified SCAFFOLD for federated learning in environments with periodic client participation and heterogeneous data. The authors address the realistic setting where clients (e.g., mobile devices) are not always available for participation. The proposed algorithm aims to achieve linear speedup, reduced communication rounds, and resilience to data heterogeneity under non-convex optimization. The paper includes theoretical analysis demonstrating the benefits of Amplified SCAFFOLD over existing methods and provides experimental results to validate the proposed approach.
Strengths: 1. The introduction of Amplified SCAFFOLD is well-motivated. The algorithm's design to handle non-i.i.d. client participation and its ability to provide tighter guarantees than previous work is well-supported by their theoretical analysis.
2. The experiments with synthetic and real-world data (e.g., Fashion-MNIST and CIFAR-10) demonstrate the algorithm's effectiveness and robustness under various conditions.
Weaknesses: 1. Some assumption is too strong, i.e., bounded objective function gap $f(\\mathbf{x}) - f\_{\\min}$ for all $\\mathbf{x} \\in \\mathbb{R}^d$ in assumption 1(a), which is not satisfied for coercive objective function, a commonly used condition in optimization. In addition, previous works [1,2] only utilized the gap between the initial point and the optimum $f(\\mathbf{x}_0) - f\_{\\min}$.
2. While the paper provides comparisons with several baselines, it would be helpful to see related works with more recent state-of-the-art methods that might also address similar challenges, e.g., [3].
[1]. Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., & Suresh, A. T. (2020). Scaffold: Stochastic controlled averaging for federated learning. International conference on machine learning.
[2]. Wang, S., & Ji, M. (2022). A unified analysis of federated learning with arbitrary client participation. Advances in Neural Information Processing Systems.
[3]. Crawshaw, M., Bao, Y., & Liu, M. (2024). Federated learning with client subsampling, data heterogeneity, and unbounded smoothness: A new algorithm and lower bounds. Advances in Neural Information Processing Systems.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors explain from a technical point of view how their amplified SCAFFOLD algorithm is robust against data heterogeneity? I can see that the weights of the participating customers are amplified, but no other mechanism to mitigate data heterogeneity has been identified.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your effort in reviewing our paper. Below we have responded to your thoughts and answered your questions.
Weaknesses:
1. **Meaning of Assumption 1(a).** We addressed this point in the general response: our Assumption 1(a) contains a typo. Actually, the correct version of our Assumption 1(a) matches the references you mentioned, and we will fix this typo in the updated version.
2. **Comparison against similar methods.** In your review, you wrote "it would be helpful to see related works with more recent state-of-the-art methods that might also address similar challenges, e.g. [3]". From the empirical perspective, we included additional experiments to compare against several baselines; see \#1 of the general response for more information. From the theoretical perspective, we are not aware of additional works not cited in our paper that achieve convergence guarantees under non-i.i.d. client participation. The reference you mentioned [3] provides an algorithm for FL under $(L_0, L_1)$-smoothness with i.i.d. client sampling, which is a problem orthogonal to the non-i.i.d. client participation that motivates our paper. In fact, in the smooth setting (i.e. $L_1 = 0$), the algorithm of [3] degenerates to the SCAFFOLD algorithm with i.i.d. client sampling, which we have already compared against. We can refer to this work in the updated version of our paper, and discuss its relation to our work.
Questions:
1. **How does Amplified SCAFFOLD avoid heterogeneity?** The main algorithmic component that creates robustness against heterogeneity is the joint effect of control variates with amplified updates. We use the control variates to modify the local update for each client in order to approximate an update on the global objective (see Line 8 of Algorithm 1). Our algorithm combines these control variates with amplified updates across windows of multiple rounds; see Section 4.1 for a description of these algorithmic components. Section 4.2 presents convergence guarantees for this algorithm (reduced communication, robustness to heterogeneity, linear speedup), and Section 4.3 explores the implications for various non-i.i.d. participation patterns.
---
Rebuttal Comment 1.1:
Comment: Thank you for reviewing our paper. In our rebuttal, we responded to your questions and concerns. We pointed out that your skepticism of assumption 1(a) is due to a typo, included an empirical comparison against more baselines (general response) and a discussion of [3] (weakness #2), and described our algorithm's mechanism for avoiding heterogeneity. Please let us know if we have addressed your concerns.
Best,
Authors | Summary: This work addresses the limitations of federated learning under realistic client participation patterns, specifically focusing on nonconvex optimization. The proposed algorithm, Amplified SCAFFOLD, achieves linear speedup, reduced communication, and resilience to data heterogeneity without requiring strong assumptions. Compared to previous methods, it significantly reduces the required communication rounds for finding a $\epsilon$-stationary point in cyclic participation scenarios. The analysis provides tighter guarantees, and experimental results on both synthetic and real-world data demonstrate the algorithm's effectiveness.
Strengths: 1.) The paper is well written
2.) There is extensive theoretical analysis
3.) Supported by experiments on baselines.
Weaknesses: See the questions section.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1.) While setting up the problem in section 3, I think having a notation table would have been beneficial. Because there are a ton of notations in the paper, many of which are defined later.
2.) In Assumption 1.(a) what is the $\Delta$ and what is the intuitive meaning behind this assumption? Does it have anything to do with the non-negativity of the loss function?
3.) Typo: In line 133, the authors have mentioned $\bar{q}_{r0}$.
4.) Do the authors use the assumption of data heterogeneity mentioned in Table 1 in their proof? If not, then how did they avoid to use it?
5.) As the authors say, the work is a culmination of algorithmic components from references [14] and [34] (references from the paper), and I agree that analysis is by no means trivial. So, combining the algorithmic component of Amplified FedAvg paper and Fedprox would also provide a better result. Can this be checked experimentally, or did the authors observe anything?
6.) Since the papers signify the main contribution to achieving communication efficiency, I would also like to see some experiments with algorithms like FedAdam and FedYogi. Along similar lines of work, in [1] (check below), the authors showed that momentum-based methods can reduce client drift and perform better than methods like SCAFFOLD for full participation.
7.) Did the authors also check the efficacy of the proposed algorithm with different participation strategies, as mentioned in the paper?
[1] Cheng, Ziheng, et al. "Momentum benefits non-iid federated learning simply and provably." arXiv preprint arXiv:2306.16504 (2023).
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes, the authors have mentioned the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful suggestions. We have responded to your thoughts and questions below.
Questions:
1\. **Table of notation.** Thank you for this suggestion. We agree that it would make the presentation more clear and we will provide a table of notation in the updated version.
2\. **Meaning of Assumption 1(a).** As we said in the general response, our Assumption 1(a) actually contains a typo: the correct version should say $f(x_0) - \min_{x \in \mathbb{R}^d} f(x) \leq \Delta$, which is a standard condition in convergence proofs.
3\. **Typo on line 133.** Thank you for pointing this out. The variable $\bar{q}_{r_0}^i$ first appears on Line 132, and immediately after we refer to its definition in Algorithm 1, but the reference should point to line 17 of Algorithm 1, not line 18.
4\. **Usage of heterogeneity assumption.** Since multiple reviewers addressed this point, we discussed it in our general response. Our main message is that we do not use any heterogeneity assumption in the proof, because the necessity of this assumption is completely eliminated by the use of control variates. This circumstance is the same as in SCAFFOLD: the analysis of SCAFFOLD does not require a heterogeneity assumption, but such an assumption is referenced in their paper because it is used by baselines.
5\. and 6\. **Comparison with additional baselines.** In your review, you requested that we compare against additional baselines such as FedAdam, FedYogi, FedAvg-M, and the combination of Amplified FedAvg with FedProx. We agree that these are important baselines to consider, so we have evaluated all four of these baselines in the experimental settings of the main paper. The results are thoroughly discussed in the general response, and the loss curves are shown in Figures 1 and 2, but we highlight the main findings here.
For both FashionMNIST and CIFAR-10, **Amplified SCAFFOLD achieves the best training loss and testing accuracy among all baselines, including the four new baselines.** In general, FedAdam and FedYogi were competitive, outperforming every algorithm except for Amplified SCAFFOLD for CIFAR-10, and every algorithm except for Amplified SCAFFOLD and SCAFFOLD for FashionMNIST. Ultimately these algorithms fell short of Amplified SCAFFOLD, which is consistent with the fact that these baselines were designed for i.i.d. client participation, whereas Amplified SCAFFOLD has convergence guarantees under periodic participation.
In response to your question about combining Amplified FedAvg with FedProx, we note that Amplified FedProx did not perform better than Amplified FedAvg. This means that combining amplified updates with control variates is much more effective than combining amplified updates with FedProx regularization.
Please see the general response for a detailed discussion of this additional experiment.
7\. **Experiments with different participation strategies.** In response to your question about different participation strategies, we have included an additional experiment under a new non-i.i.d. participation pattern. We refer to this pattern as Stochastic Cyclic Availability (SCA), and we believe that it captures user device availability which is both periodic and unreliable. We thoroughly discussed the details of this participation pattern and the results of the experiment in the general response, and we highlight the results below. Figure 3 of the 1-page PDF shows the loss curves for all algorithms (including the four additional baselines) for CIFAR-10 under SCA participation.
From Figure 3, we conclude that **Amplified SCAFFOLD outperforms all baselines under a second non-i.i.d. participation pattern**. Amplified SCAFFOLD reaches the lowest training loss and highest testing accuracy out of all algorithms, and even reaches a slightly lower training loss under SCA participation than cyclic participation (shown in Figure 2). We believe that this experiment bolsters the empirical validation of our proposed algorithm, demonstrating that Amplified SCAFFOLD performs well under multiple non-i.i.d. participation patterns.
Please see the general response for the full details of this experiment.
---
Rebuttal 2:
Comment: Thank you again for your efforts in the review process. We responded to your questions and concerns in our rebuttal. Specifically, we addressed your request for additional experiments (#5-7) and discussed the heterogeneity assumption (#4). Please let us know if we have addressed your concerns, and if you have any more questions. We are happy to continue discussion.
Best,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for your response; my concerns have been addressed. I want to keep my score.
---
Reply to Comment 2.1.1:
Comment: Thank you for the feedback. Do you have any additional concerns which limit the paper from receiving a higher score? If not, we kindly suggest that you consider raising the score, since we have fulfilled the request for more experiments and answered all questions. Thank you.
Best,
Authors
---
Rebuttal 3:
Comment: **Amplified FedProx**: Our implementation of Amplified FedProx is obtained by starting from the implementation of Amplified FedAvg [(Wang & Ji, 2022)](https://arxiv.org/abs/2205.13648), and adding the FedProx regularization term $\mu/2 \lVert x - x_r \rVert^2$ to the objective of each local client, where $x_r$ denotes the global model from the last synchronization step. Table III from [1] shows that FedAvg and FedProx perform very closely in most settings without systems heterogeneity, and this is consistent with our results: The curves for FedProx are almost overlapping with those of FedAvg, and the curves for Amplified FedProx are almost overlapping with those of Amplified FedAvg. We believe that the experimental results of [1] are consistent with ours.
**Avoiding heterogeneity assumption**: The essential technique that allows SCAFFOLD (and Amplified SCAFFOLD) to avoid the heterogeneity assumption is to modify the direction of each local update to approximate an update on the global gradient: we show some details below.
The standard analysis of FedAvg wants to show descent of the global objective $f$, but the update directions depend on $\nabla f_i$ (i.e. the local gradients), so the rate of descent will be slowed down by the difference between $\nabla f$ and $\nabla f_i$. More concretely, the analysis involves the difference between the global gradient $\nabla f(x_{r,k}^i)$ and the update direction $\nabla f_i(x_{r,k}^i)$, which is: $\lVert \nabla f_i(x_{r,k}^i) - \nabla f(x_{r,k}^i) \rVert$. **Bounding this term requires the heterogeneity assumption**, but this might not be necessary if we change the update direction...
For SCAFFOLD, the update direction is changed to $\nabla f_i(x_{r,k}^i) - G_r^i + G_r$, where $G_r^i$ is the average of stochastic gradients encountered by client $i$ during the round before $r$, and $G_r$ is the average of $G_r^i$ over $i$. Therefore, we can bound $\lVert G_r^i - \nabla f_i(x_{r,k}^i) \rVert$ and $\lVert G_r - \nabla f(x_{r,k}^i) \rVert$ **by smoothness alone, without requiring any additional assumptions**. This is very helpful for the convergence analysis for the following reason: when we have to compare the descent direction $\nabla f(x_{r,k}^i)$ against the local update direction $\nabla f_i(x_{r,k}^i) - G_r^i + G_r$, we can use the triangle inequality to bound $$
\lVert (\nabla f_i(x_{r,k}^i) - G_r^i + G_r) - \nabla f(x_{r,k}^i) \rVert \leq \lVert \nabla f_i(x_{r,k}^i) - G_r^i \rVert + \lVert \nabla f(x_{r,k}^i) - G_r \rVert.
$$ Again, both of the terms on the RHS can be bounded with smoothness alone. Therefore, **the difference between the global gradient and the update direction can be bounded only with smoothness: no heterogeneity assumption necessary**.
Please note that the above analysis omits certain details for the sake of brevity, but the key idea is there: control variates eliminate the need for bounded gradient dissimilarity. We use a similar essential idea in our analysis, but the proof is technically more complicated due to the added difficulty of non-i.i.d. participation. Please let us know if you have further questions on this topic.
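The control-variate correction described above can be illustrated with a toy example (a minimal sketch in our own notation and naming, not the authors' code): with exact control variates $G_r^i = \nabla f_i(x_r)$ and $G_r = \nabla f(x_r)$, the corrected direction at the first local step equals the global gradient, no matter how heterogeneous the clients are.

```python
# Two clients with heterogeneous quadratic objectives f_i(x) = 0.5 * (x - b_i)^2.
# All names (client_targets, grad_i, ...) are our own for illustration.
client_targets = [10.0, -10.0]   # very different local optima
x_r = 3.0                        # global model at the start of the round

def grad_i(i, x):                # gradient of f_i
    return x - client_targets[i]

def grad_global(x):              # gradient of f = (f_1 + f_2) / 2
    return x - sum(client_targets) / len(client_targets)

# Control variates: G_r^i = grad f_i(x_r), G_r = average of the G_r^i.
G = [grad_i(i, x_r) for i in range(2)]
G_bar = sum(G) / len(G)

# SCAFFOLD-style corrected direction at the first local step of each client:
corrected = [grad_i(i, x_r) - G[i] + G_bar for i in range(2)]
# Local gradients are -7.0 and 13.0, yet both corrected directions equal
# the global gradient grad_global(x_r) = 3.0 — no heterogeneity bound needed.
```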
---
Rebuttal Comment 3.1:
Comment: Thank you for the response. I understand that FedAvg and FedProx perform very closely without systems heterogeneity, but according to [1], SCAFFOLD also performs very closely in settings without systems heterogeneity. I am not convinced about this gap between Amplified SCAFFOLD and Amplified FedProx for the FMNIST dataset. I am sorry if I am missing something.
---
Reply to Comment 3.1.1:
Comment: You're right, thank you for clarifying your question: the results in [1] do show that SCAFFOLD is close to FedAvg and FedProx. The discrepancy between [1] and our results is likely due to differences in neural network architectures, since the performance of SCAFFOLD appears to be very sensitive to network depth [(Yu et al, 2022)](https://arxiv.org/abs/2207.06343). Indeed, Table 3 of the original SCAFFOLD paper [(Karimireddy et al, 2019)](https://arxiv.org/abs/1910.06378) shows that SCAFFOLD outperforms FedAvg and FedProx when training a logistic regression model (1 layer), while [(Yu et al, 2022)](https://arxiv.org/abs/2207.06343) showed that SCAFFOLD can struggle even with 4-layer networks. The Fashion-MNIST results you refer to use a logistic regression model, so they are consistent with Table 3 of [(Karimireddy et al, 2019)](https://arxiv.org/abs/1910.06378). Notice that in our CIFAR-10 experiments, we use a two-layer NN, and already SCAFFOLD is not much better than FedAvg and FedProx, which is consistent with Table III of [1], while Amplified SCAFFOLD maintains a significant advantage.
Thank you for the thought-provoking question! We hope that this clears up our experimental results in the context of the literature. | Summary: In this paper, the authors examine realistic participation scenarios, including cyclic client participation and arbitrary participation patterns. They focus on a non-convex optimization setting, which is common in practical applications but challenging to address. To tackle this, they introduce a novel method called Amplified Scaffold, designed to effectively correct client drift—a common issue where clients' updates diverge from the global model. The Amplified Scaffold method builds upon existing techniques but enhances their effectiveness in this more complex setting. The authors not only provide theoretical convergence guarantees, ensuring that their method is mathematically sound, but also back up their claims with extensive experimental results. These experiments demonstrate the practical effectiveness of Amplified Scaffold in various scenarios, highlighting its potential for real-world applications.
Strengths: In this work, the authors provide a detailed convergence analysis of a modified version of the well-known Scaffold method. While the theoretical framework appears to be sound, I did not thoroughly review all the proofs, so there may be some oversights. Nonetheless, the results obtained seem reasonable and align with expectations.
The authors consider a wide range of settings and sampling schemes, which makes their work relevant to a broad audience, including both researchers and practitioners. This comprehensive approach addresses a significant problem in the field, highlighting its importance for real-world applications.
Additionally, the proposed method undergoes extensive experimental testing. These experiments validate the method's effectiveness and demonstrate its practical applicability in various scenarios. The combination of theoretical analysis and experimental validation strengthens the credibility and utility of the proposed approach.
Weaknesses: 1) The related work section in your paper is limited and could be significantly strengthened by including discussions of several closely related methods. For instance, a closely related method to Scaffold is ProxSkip (also referred to as Scaffnew), which was proposed as an improved version of Scaffold. Additionally, the Tamuna method, which incorporates partial participation, is relevant. Including a discussion about the connection to these methods will provide a broader context and highlight the advancements in the field.
- Mishchenko, Konstantin, et al. "Proxskip: Yes! Local gradient steps provably lead to communication acceleration! Finally!" International Conference on Machine Learning. PMLR, 2022.
- Condat, Laurent Pierre, Grigory Malinovsky, and Peter Richtárik. "Tamuna: Accelerated federated learning with local training and partial participation." (2023).
Furthermore, an arbitrary sampling framework has been studied in several recent papers. It would be beneficial to compare the assumptions on the sampling schemes used in these studies with those in your work. This will help clarify the differences and similarities, as well as the advantages and limitations of the various approaches.
- Tyurin, Alexander, et al. "Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling." Transactions on Machine Learning Research.
- Grudzień, Michał, Grigory Malinovsky, and Peter Richtárik. "Improving Accelerated Federated Learning with Compression and Importance Sampling." Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities.
Additionally, it is worth discussing the paper that addresses optimal complexity for the non-convex federated learning setting. This paper provides insights into achieving optimal communication complexity in distributed non-convex optimization, which is highly relevant to your study.
- Patel, Kumar Kshitij, et al. "Towards optimal communication complexity in distributed non-convex optimization." Advances in Neural Information Processing Systems 35 (2022): 13316-13328.
By incorporating these discussions, your related work section will be more comprehensive and provide a clearer understanding of how your proposed method fits within the broader landscape of federated learning research. This will not only strengthen your paper but also demonstrate the relevance and impact of your contributions.
2) The major issue in your paper is that in Table 1, the comparison between methods and data heterogeneity is defined using the condition $ \sup_x \left\Vert\nabla f_i(x) - \nabla f(x)\right\Vert \leq \kappa $, which implies uniformly bounded heterogeneity. However, in the original Scaffold paper, the authors used a different measure for bounded gradient dissimilarity:
$$ \frac{1}{N} \sum_{i=1}^N \left\Vert \nabla f_i(x) \right\Vert^2 \leq G^2 + B^2 \left\Vert \nabla f(x) \right\Vert^2, \forall x. $$
This discrepancy is significant because uniformly bounded heterogeneity is a more restrictive assumption compared to bounded gradient dissimilarity. The latter is a more common and practical assumption in federated learning literature, as it better captures the variations in real-world data distributions.
I strongly recommend revising Table 1 to use the less restrictive assumption of bounded gradient dissimilarity instead of uniformly bounded heterogeneity. This will make your comparisons more relevant and applicable to a broader range of scenarios.
Additionally, the notation $\kappa$ used for bounded heterogeneity is potentially confusing. Typically, $\kappa$ is used to denote the condition number, $\kappa = \frac{L}{\mu}$, in the context of strongly convex functions. To avoid this confusion, I suggest changing the notation for bounded heterogeneity to something more standard and distinct, which will make your paper clearer and more consistent with established conventions in the field.
3) Assumption 2 is quite complex and may be challenging for readers to fully grasp. I recommend providing a more detailed discussion of the class of sampling schemes considered under this assumption. Specifically, elaborating on the types of sampling schemes included, their characteristics, and why they are relevant to your study will help clarify the underlying assumptions. This additional context will make it easier for readers to understand the scope and implications of your assumptions, thereby enhancing the clarity and impact of your paper.
4) In the experimental section, the rationale behind the choice of hyperparameters is not clearly explained. To improve clarity, I recommend providing a detailed explanation of the specific hyperparameters selected for your experiments. This should include the criteria used to choose these values, how they were tuned, and any relevant considerations or trade-offs. Offering this additional context will help readers understand why particular hyperparameter settings were used and how they influence the outcomes of your experiments. This level of detail will enhance the reproducibility of your work and provide valuable insights for others seeking to build upon your research.
5) The main issue with the theoretical results is that they do not generalize the Scaffold method in the i.i.d. setting. For Scaffold, the convergence rate is $ \frac{\Delta L}{\varepsilon^2} \left(\frac{N}{S}\right)^{\frac{2}{3}} $, while for Amplified Scaffold, the rate is $\frac{\Delta L}{\varepsilon^2} \left(\frac{N}{S}\right) $. This discrepancy means that Amplified Scaffold does not recover the more favorable convergence rate of the original Scaffold method under i.i.d. conditions.
It would be a significant improvement if Amplified Scaffold could match the original Scaffold's rate in the i.i.d. setting. Demonstrating this equivalence would not only strengthen the theoretical results but also enhance the practical applicability of Amplified Scaffold. If you can achieve this and show that Amplified Scaffold can indeed recover the original Scaffold's rate in the i.i.d. setting, I am willing to increase the score. This adjustment would highlight the robustness and efficiency of the Amplified Scaffold method, making it a more compelling contribution to the field.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please check the Weaknesses section.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations section is well-written, but it should also note that the Amplified Scaffold method does not recover the rate of the original Scaffold method in the i.i.d. setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments on our paper. Below we have responded to your questions and concerns.
Weaknesses:
1. **Missing related work.** Thank you for pointing out these works. We agree that including these works in the discussion of our paper will establish a broader context of the FL literature, especially around problems of client sampling. Below we've included a brief comparison of each work you listed against our submission:
(a) Mishchenko et al, 2022 (ProxSkip/ScaffNew): ScaffNew achieves reduced communication rounds for FL with full participation on strongly convex problems.
(b) Condat et al, 2023 (Tamuna): Tamuna uses communication compression and local steps together to reduce communication in FL with i.i.d. participation on strongly convex problems.
(c) Tyurin et al, 2023: Improves convergence rate of nonconvex SGD for finite sum problems in terms of the smoothness constants, based on various unbiased data sampling protocols. Includes an application to FL with various client sampling protocols, which is related to our motivating problem of non-i.i.d. client sampling. These client sampling protocols are required to be unbiased.
(d) Grudzien et al, 2023: Combines local training, client sampling, and communication compression to accelerate convergence in terms of condition number and dimension (strongly convex problems). Similarly to [Tyurin et al, 2023], the client sampling schemes from this paper are required to be unbiased.
(e) Patel et al, 2022: This paper provides lower bounds for distributed non-convex, stochastic, smooth optimization with intermittent communication, both in the full and partial participation settings. They also include algorithms employing variance reduction which match (or closely match) lower bounds in the full and partial participation settings. These lower bounds are generally important to understand the limits of optimization under i.i.d. client sampling.
In summary, these works are all relevant for FL in general. With respect to client sampling, (a) applies to full participation, (b) and (e) apply for i.i.d. client sampling, and (c) and (d) apply to unbiased sampling. The analysis of these papers cannot be directly applied to our setting of periodic participation, and our work aims to fill this gap by providing an algorithm with non-convex convergence guarantees even with non-i.i.d. client participation. We will cite and discuss these works in our updated paper.
2. **Different heterogeneity assumptions.** We addressed this point in the general response, but we can further touch on it here. The main point is that we do not use any heterogeneity assumption in our paper. The only reason that the condition $\lVert \nabla f_i(x) - \nabla f(x) \rVert \leq \kappa$ is stated in our table is that this assumption is used by the baselines we compare in Table 1. Our analysis avoids any heterogeneity assumption because our use of control variates eliminates the need for one (this is also the case in the SCAFFOLD analysis). We agree that bounded gradient dissimilarity is less restrictive than uniformly bounded heterogeneity, but even less restrictive is no heterogeneity assumption at all!
3. **Complexity of assumption 2.** We agree that Assumption 2 is somewhat dense, although we have already included in Section 3.2 a detailed discussion of different sampling schemes which satisfy assumption 2. If you have additional suggestions to improve the clarity of this assumption, please let us know.
4. **Explanation of hyperparameters.** As stated on Line 269, all of the hyperparameters for every baseline are tuned with grid search. The search ranges and final values for each hyperparameter are given in Table 2 of Appendix D. To evaluate each hyperparameter combination in the grid, we run the training setup described in the main body (with only one random seed instead of five) and choose the hyperparameter combination that reaches the smallest training loss by the end of training. After the hyperparameters were tuned, we evaluated each algorithm over five random seeds.
5. **Recovering SCAFFOLD's rate.** You are correct that our convergence rate has a slightly worse dependence on $N/S$ than SCAFFOLD's. We pointed this out in Section 4.3 (line 223), and we provide a detailed explanation of this discrepancy in Appendix E. Essentially, there is a potential small issue in the analysis of SCAFFOLD for the case of partial participation. Instead of repeating this issue in our analysis, we accept a slightly worse dependence on $N/S$. It is likely that the issue can be fixed to recover the $(N/S)^{2/3}$ rate for both their algorithm and ours, but this is not the focus of our paper. In this paper, we focused on achieving convergence under periodic participation with reduced communication, linear speedup, and resilience to heterogeneity, and by doing so we have improved over previous work. If the $N/S$ dependence is an important issue to you, please read our discussion of the details, which is 1 page in Appendix E (lines 807-829).
---
Rebuttal Comment 1.1:
Comment: Thank you again for reviewing our paper. In our rebuttal, we responded to the points from your review. Of particular importance is our discussion of the different heterogeneity assumptions (#2) and a comparison with SCAFFOLD's rate in the i.i.d. case (#5). Also, we included a discussion of new experimental results in the general response. Please let us know if we have addressed your concerns. We are happy to continue discussion.
Best, Authors
---
Rebuttal 2:
Title: Response to authors
Comment: Thank you for your rebuttal!
You have addressed my concerns, and as a result, I will be increasing my score accordingly.
Let me clarify the concerns I had and highlight the aspects that addressed them:
1) Thank you for providing a discussion of related work. This aspect is resolved.
2) Let me clarify the aspect of the heterogeneity assumption. I agree that in your results, you do not use the heterogeneity assumption. This is straightforward since you use a client drift reduction mechanism, such as SCAFFOLD. The idea behind the SCAFFOLD mechanism is to allow a method to work without the heterogeneity assumption, and this is clear.
However, I must mention that when you provide information about previous results, you must be precise in describing the details and assumptions made in those results. This helps prevent confusion for readers, as they are not expected to read all the papers in the field. I requested clarification on the aspects related to the table.
3) Under Assumption 2, you briefly described what conditions (a), (b), and (c) mean, with basically one sentence for each. I suggest providing more detailed explanations. For example, why did you specifically use equal frequency in condition (b)? Additionally, there is no mention of condition (d). While I understand the meaning of this assumption, it is not easy to grasp with such a compressed explanation. In Section 3.2, you provide examples, which are valuable, but the examples do not fully explain the assumptions themselves.
4) Let me clarify this aspect once again to avoid any confusion. I understand that you used a grid search for step sizes. What I meant is that the selection of client sampling parameters—such as the number of groups, availability time, communication interval, and number of communication rounds—is not described. These could also be considered hyperparameters. To avoid confusion, let us refer to them as experimental setup parameters. All I asked for was some explanation of how these parameters were selected. In Section D.1, these parameters are simply stated without clarification (lines 756-762).
5) I did indeed miss the explanation in Appendix E. Thank you for pointing this out. This clarification is indeed valuable. I recommend mentioning it earlier in the text (not at the end of the entire paper).
Since aspects 1 and 5 are fully resolved, I have increased my score. | Rebuttal 1:
Rebuttal: Thank you to all of the reviewers for your time and effort in the review process. Here we describe additional experimental results that we have added to address the reviewer comments, and give answers to common questions. We have also responded individually to each review below.
1. **Experiments with additional baselines.** For the FashionMNIST and CIFAR-10 experiments from the original paper, we have evaluated four additional baselines: FedAdam [28], FedYogi [28], FedAvg-M [Cheng, 2023] (see citation below), and Amplified FedAvg + FedProx [34, 19]. The results are shown in Figures 1 and 2 of the 1-page PDF, where you can see that **Amplified SCAFFOLD maintains superior performance against all of the added baselines in terms of both training loss and testing accuracy.**
We tuned the hyperparameters of all baselines according to the hyperparameter ranges suggested in the original paper of each algorithm, and we allow the same compute budget for tuning each baseline as we did for tuning the algorithms in the original paper, in terms of the total number of hyperparameter combinations evaluated. Also, the results are averaged over five random seeds.
For FashionMNIST: FedAdam and FedYogi reach moderate training loss quickly, but are soon overtaken by Amplified SCAFFOLD and later by SCAFFOLD. FedAvg-M exhibits a minor advantage over FedAvg, but performs about the same as Amplified FedAvg. Amplified FedProx (i.e. Amplified FedAvg with FedProx regularization) performs nearly identically to Amplified FedAvg.
For CIFAR-10: FedAdam is more competitive, but is still outperformed by Amplified SCAFFOLD. FedYogi and FedAvg-M are further behind, though both still outperform SCAFFOLD. Amplified FedProx is again nearly identical to Amplified FedAvg.
These new experiments demonstrate that **Amplified SCAFFOLD outperforms strong empirical baselines (FedAdam, FedYogi) under cyclic client participation**, reinforcing the empirical validation of our algorithm. This performance is consistent with the fact that Amplified SCAFFOLD has convergence guarantees under periodic participation, while FedAdam and FedYogi were not designed for settings beyond i.i.d. client sampling.
2. **Experiments with different participation strategies.** We also added an evaluation under another non-i.i.d. participation pattern. Figure 3 of the 1-page PDF shows the evaluation of all baselines (including the additional baselines from \#1) for CIFAR-10 under this new participation pattern, which is described in the next paragraph. The results show that **Amplified SCAFFOLD outperforms all baselines under a second non-i.i.d. participation pattern**.
We refer to this new pattern as Stochastic Cyclic Availability (SCA), and it models device availability which is both periodic and unreliable. Similarly to cyclic participation, the set of clients is divided into $\bar{K}$ groups, and at each round one group is deemed the "active" group, while the others are inactive. Unlike cyclic participation, in SCA not every client in the active group is always available: Instead, when a group becomes active, the clients in that group become available for sampling with probability $80\%$, while clients in inactive groups have probability $5\%$ to be available for participation. The active group changes every $g$ rounds. This stochastic availability models the real-life situation where a client device can be unavailable at a time of day when it is usually available, or vice versa. In this way, SCA is more flexible than cyclic participation and better captures the unreliability of client devices. Lastly, we reused the remaining settings ($g, \bar{K}, I$, etc.) and the tuned hyperparameters for each baseline from the CIFAR-10 experiment under cyclic participation. Again, we average each algorithm's performance over five random seeds.
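The SCA participation pattern described above can be sketched in a few lines (our own reconstruction from the description, not the authors' code; all parameter names are ours):

```python
import random

def sca_available(round_idx, n_clients=12, n_groups=3, g=2,
                  p_active=0.8, p_inactive=0.05, rng=random):
    """Stochastic Cyclic Availability: the active group cycles every g
    rounds; its clients are available w.p. p_active, all others w.p.
    p_inactive. Returns the per-client availability mask for one round."""
    active = (round_idx // g) % n_groups          # active group this round
    mask = []
    for client in range(n_clients):
        group = client % n_groups                 # fixed group assignment
        p = p_active if group == active else p_inactive
        mask.append(rng.random() < p)
    return mask

random.seed(0)
for r in range(6):
    print(r, sum(sca_available(r)))               # available clients per round
```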
Results for CIFAR-10 under SCA participation are shown in Figure 3. Again, **Amplified SCAFFOLD outperforms all baselines under SCA participation**. The relative performance of each baseline is similar, with FedAdam staying competitive with Amplified SCAFFOLD, followed by FedYogi and FedAvg-M. The remaining baselines perform significantly worse, and again Amplified FedAvg does not benefit from adding FedProx regularization.
This new experiment shows that **Amplified SCAFFOLD performs well in other non-i.i.d. participation patterns beyond cyclic participation.** We will include these new results in the updated version of the paper.
3. **Meaning of assumption 1(a).** Several reviewers pointed out that Assumption 1(a) is too strong or counterintuitive. This confusion is actually just due to a typo in the statement of Assumption 1(a). The correct version should say that $f(x_0) - \min_{x \in \mathbb{R}^d} f(x) \leq \Delta$, which is standard for convergence proofs.
4. **Heterogeneity assumptions.** Two reviewers asked whether the heterogeneity assumption $\lVert \nabla f_i(x) - \nabla f(x) \rVert \leq \kappa$ is used for our proof, and whether this assumption is reasonable in our setting. In short, our proof does not use this assumption or any assumption on the heterogeneity; as stated in Theorem 1, the only assumptions we need are Assumption 1, Assumption 2, and Equations (1) and (2) (both only depend on the client sampling distribution). The heterogeneity assumption with $\kappa$ is only stated in Table 1 because it is used by several baselines. Similarly to SCAFFOLD in the case of i.i.d. sampling, we can avoid this assumption through the use of control variates to approximate the global gradient at each local step.
[Cheng, 2023] Cheng, Ziheng, et al. "Momentum Benefits Non-iid Federated Learning Simply and Provably." The Twelfth International Conference on Learning Representations.
Pdf: /pdf/51305974c3d3ba0d5080d33ab60a9b190dc4cb15.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning | Accept (poster) | Summary: This paper presents a method which discretizes action trajectories from offline data into skills through a BPE-inspired tokenization method. These discrete action trajectory skills are given directly to a high-level agent to utilize for solving downstream tasks.
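The BPE-style tokenization the summary describes can be illustrated with a toy merge loop over discretized actions (an illustrative sketch in our own naming, not the paper's implementation): the most frequent adjacent pair of action tokens is repeatedly merged into a composite "skill" token.

```python
from collections import Counter

def bpe_skills(seqs, n_merges):
    """Toy byte-pair encoding over discretized action sequences.
    Each merge replaces the most frequent adjacent token pair with a
    new composite tuple token (a candidate 'skill')."""
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))           # count adjacent pairs
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]       # most frequent pair
        merges.append((a, b))
        new_seqs = []
        for s in seqs:                            # greedy left-to-right merge
            out, i = [], 0
            while i < len(s):
                if i + 1 < len(s) and s[i] == a and s[i + 1] == b:
                    out.append((a, b)); i += 2
                else:
                    out.append(s[i]); i += 1
            new_seqs.append(out)
        seqs = new_seqs
    return merges, seqs

demo = [["L", "L", "R", "L", "L"], ["L", "L", "L", "L"]]
merges, tokenized = bpe_skills(demo, 2)
```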
Strengths: **Presentation:** Writing is clear and the method is intuitive and simple, the paper is overall easy to understand. The main figure also does a good job of distilling the algorithm into a simple illustration.
**Method:** The method is extremely compute-efficient in skill creation. It's also faster when used for downstream RL: because the skills are fixed action sequences, no forward passes are needed to execute them.
**Results:** Overall, results are convincing in that the method works on par with or better than other non-state conditioned skill-based RL methods.
**Experiments:** The authors performed a good amount of ablations and the experiments are performed across a few locomotion, manipulation, and discrete procgen settings that seem pretty comprehensive for an RL paper.
Weaknesses: **Method:**
- One limitation of this method is that it *cannot be state conditioned*. Figure 5 clearly demonstrates that state-conditioned methods that can utilize state-based priors (e.g., SPiRL) can perform much better depending on the problem setting (Franka Kitchen, where there is a lot of overlap between the pre-training data and testing environment). On the other hand, methods like SPiRL can also be used without state-conditioning. This thus is less flexible in this regard.
- Another limitation that stems from the comparison in the introduction against large language models and NLP is that in NLP, associations between text tokens are learned during pre-training time and then utilized for downstream tasks. The authors’ method does not utilize this intuition despite being influenced by NLP. Instead, the policy must learn how to associate the action sequence tokens from scratch during RL training time, possibly wasting many environment steps just learning the associations. One way to fix this would be to learn some state-conditioned associations between skills/some prior over how to associate discrete action sequences together.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Why choose a specific desired length $L=10$ for the action sequences? A more flexible method would be able to use skills of varying lengths to better solve the task like https://openreview.net/pdf?id=r4XxtrIo1m9 or the already cited CompILE work by Kipf et al. It seems like the proposed method could easily be adapted for variable length skills too.
- Why are the ablations performed on Hopper (appendix D + E) but no Hopper experiments are in the main paper?
- Because this came out ~3 months before NeurIPS submissions, it seems reasonable the authors did not cite it yet. But this paper https://arxiv.org/abs/2402.10450 is extremely similar in its use of BPE for creating skills for downstream policy learning. How does your method compare to it?
As such, when using the NeurIPS rating scale, I'm currently giving this paper a 6 because I believe it is of moderate impact (results are adequate, method is nice and simple, but I'm not sure it would be considered high-impact, i.e. a 7, and there are some flaws and remaining questions).
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Sufficiently addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your care in reading our paper. We appreciate the positive feedback on the clarity, and the comment on efficiency and the strength of the results, including a “pretty comprehensive” evaluation. We respond to your concerns below.
1. Method cannot be state-conditioned
Though it is true that we present an unconditional method, it is completely possible to adapt into a state-conditioned method. After discovering skills and tokenizing the demonstrations via BPE, one can take all the observation-action pairs that correspond to, say, 3-action prefixes of our skills, and train a classifier to map from observations to one of these prefixes. Then, one can bias the sampling during RL toward skills whose prefix is more likely under the prior, in exactly the same way that SPiRL does. We did attempt to prototype this method during the rebuttal, but it was not possible to properly tune and compare to prior work due to time constraints. We do consider it interesting for future work.
2. Policy must learn association.
This is intentional. Our goal here is to drive strong exploration, even when the collected demonstration data is small or out of domain (Figure 6). Having a strong skill prior requires lots of collected in-domain data, as in the language modeling case. That being said, we agree that learning a prior is a reasonable solution in such settings. See above for how we might go about doing so.
3. Why choose a specific skill length?
This choice was made to make comparisons to baselines fair. We agree that BPE is perfectly suited to discovering skills of different lengths, which would be more flexible and make more sense.
4. Why are ablations on Hopper?
One of these ablations explores the effect of discretization in a task where we might expect it to matter more, as Hopper only has a single contact point. The sparse-reward tasks explored in the main text are quite resilient to discretization (Ant and Franka are stable), so it made less sense to do this for those tasks. For the ablation on data quality, D4RL does not provide quality splits for the Ant and Franka tasks; we could try to synthesize quality splits by combining with random data, but this seemed like a poor proxy. Hopper provides these splits.
5. Related work?
Thank you for pointing out this very related concurrent work; we will be sure to add a citation! It appears they have a similar use of BPE, but their goals are different: they discover BPE skills in the offline setting and then test the ability to generalize to downstream tasks with additional finetuning, still in the offline setting. Nowhere do they concern themselves with exploration or the online setting, and they use the entire vocabulary without pruning, which would be impossible for us as we show.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, my questions are answered and I'll be raising my score to a 7.
---
Reply to Comment 1.1.1:
Comment: We are glad that our response was able to answer your questions and change your opinion of the paper for the better. Thank you for the engagement and careful read. | Summary: This paper uses Byte pair encoding to create a discretised action space for RL from demonstrations. The authors show that Byte pair encoding can:
* improve exploration in sparse-reward settings,
* make constructing the skill action space computationally cheap compared to methods that train deep learning models, and
* avoid conditioning on observations, which improves generalisation.
Strengths: The authors present a clear explanation of a (as far as I know) novel method for extracting skills from demonstrations. What is really cool is that you only need to run k-means clustering, making this approach scale favorably compared to other approaches that use deep learning.
Also, given the success of BPE in NLP, this is a very intuitive idea that seems like a great contribution to the RL community.
The ablations answered most of my questions about the design of their method.
Weaknesses: It is difficult for me to understand the baselines and some details of the experiments. For example, it's not clear how many demonstrations are used for each of the methods. Presumably, as it only uses k-means and BPE, it can work with a small number of demonstrations (this is hinted at in the skill transfer results). I assume, for example, that offline RL methods cannot be used because the dataset of demonstrations is too small? It would be good to explain these basics to someone (like me) who is not familiar with this subfield.
Also, from the paper alone, I cannot tell whether the baselines are the state of the art approach for skill extraction.
To help with these weaknesses I would like:
* details on the number of demonstrations used
* more explanation of what the baselines are and how they work
For the skill transfer experiments, it would be good to run the baseline (even if it performs very poorly) and plot it in Figure 6.
Technical Quality: 3
Clarity: 2
Questions for Authors: How important is using K-means clustering before the byte pair encoding?
Could you use simpler discretisation approaches (e.g., like the one used in the continuous control experiments here: https://arxiv.org/abs/2107.05431)?
They mention filtering demonstrations
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper does not have an explicit limitations section, which I think it could benefit from. For example, I would like to see a discussion of how this approach compares to settings where there is a very large amount of data to train offline RL agents. How does it compare then? In general, I would like to understand when this approach is the most suitable given data and simulator constraints.
I would also like to understand a little better how far this approach can go. For example, are there tasks at which point the vocab becomes filled very quickly and this approach is not applicable?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and care in reviewing our paper. We appreciate your feedback and share your excitement regarding the intuitiveness of the method and the computational efficiency. We answer your questions below.
1. How many demonstrations are used?
For these tasks we use the existing D4RL dataset, and we give details on demonstrations in Appendix A. As to the quantity, we use ~1000 demonstrations for the AntMaze results, ~600 for the Kitchen results and ~100 for CoinRun. These numbers are defined (except in the case of CoinRun which we collect) by practices in prior work that we compare against. In Figure 6 we explore subsampling.
2. Can offline methods be used?
D4RL was a dataset collected for offline RL, but the goal of our work is orthogonal. Even though offline RL might be used for these tasks, it assumes access to the observations, and the reward labels, whereas we only require the action sequences. In addition, because offline RL requires access to this extra information, it will memorize the particular layout of the scene in order to accomplish the task, and if the layout is rearranged, as in the transfer experiment, then new demonstrations and reward will have to be collected in the new layout. Thus, the use of offline RL is predicated on the availability of in-domain data, which is the opposite of exploration.
3. Are baselines SOTA?
The baselines used are strong methods for unconditional learning, though the benchmarks tested in the literature are rarely shared. Our goal in this work is not only to compete for SOTA, but additionally to point out that a relatively simple and inexpensive method can compete with very expensive neural-network-based solutions. In particular, these NN-based solutions are so expensive that even running experiments takes substantially longer. The baselines are chosen to represent two common classes seen in the literature: models that generate subsequences with VAEs (SSP) and sequence models as priors (SFP).
4. How important is $k$-means before BPE?
Without some kind of discretization step it would be impossible to run BPE as we need to merge the most common pair among a discrete set. k-means was chosen as it is a very standard way to discretize.
5. Could we use simpler discretization through binning?
We did think about this possibility, but binning would result in a number of discrete units that grows exponentially with the action dimension. This means that there could be a very large number of subwords discovered, which would result in a necessarily large final vocabulary. Such a large vocabulary makes RL difficult, as we show in the paper. In addition, most of these discrete units might not actually be necessary. In the example of the Ant, not every possible motion of every leg joint is necessary to make the Ant move, just the coordinated motions of lifting each leg up and down as a whole.
6. Missing limitations.
We do address limitations in the conclusion section, but found it difficult space-wise to include a separate heading. We do not believe the comparison to offline RL is applicable as it is an orthogonal problem with different aims. Our goal is to improve exploration even when in-domain task data is not already available.
7. When is this method not applicable?
Thanks for the question. We discuss this in the Conclusion section where we mention several limitations. In particular we speak about discretization removing resolution, which might be necessary for very fine-grained motor skills, and the open-loop execution, which would have disadvantages in stochastic environments. We believe some of these issues are addressable in the future.
---
Rebuttal Comment 1.1:
Comment: We wanted to reach out to see if you had any remaining questions given that the discussion period wraps up in a few hours.
Thank you again for your positive feedback. | Summary: This paper presents a novel method for skill discovery in reinforcement learning by leveraging tokenization techniques from Natural Language Processing (NLP). The approach involves discretizing the action space through clustering, and then using byte-pair encoding to generate temporally extended actions. The method is evaluated on various environments including AntMaze, Kitchen, and CoinRun, demonstrating improved performance over some state-free skill learning baselines and vanilla SAC.
Strengths: This paper has several strengths:
- The perspective of using NLP tokenization techniques for skill discovery in RL is creative and innovative.
- The method outperforms baselines in several sparse-reward environments.
- The study examines various aspects beyond performance, including computational efficiency, exploration behavior, and domain generalization.
Weaknesses: While the paper demonstrates several strengths, there are also potential limitations and areas for improvement:
- The range of baseline skill discovery methods could be expanded. Additional algorithms, such as those proposed in [1] and [2], merit consideration.
- The presentation requires refinement. The current format of tables and figures impedes clear interpretation, potentially obscuring the full efficacy of the proposed method.
- The effectiveness of the skill discovery process appears to be heavily contingent on the specific dataset utilized. A more comprehensive exploration of dataset variability and its impact on outcomes would strengthen the study.
- There is no formal guarantee that the discovered skills are sufficient to construct an optimal policy. This theoretical limitation warrants acknowledgment and discussion.
- The computational complexity of the merging process may prove prohibitive for high-dimensional input domains, potentially limiting the method's scalability.
- It is unclear how the method would perform with larger, more diverse datasets that might require a larger skill space. This leaves questions about the approach's generalizability unanswered.
[1] Y. Jiang, E. Z. Liu, B. Eysenbach, Z. Kolter, and C. Finn, “Learning options via compression,” arXiv preprint arXiv:2212.04590, 2022.
[2] A. Singh, H. Liu, G. Zhou, A. Yu, N. Rhinehart, and S. Levine, “Parrot: Data-driven behavioral priors for reinforcement learning,” arXiv preprint arXiv:2011.10024, 2020.
Technical Quality: 2
Clarity: 2
Questions for Authors: See the "Weaknesses" section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors have discussed a few limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We appreciate that you highlighted the creativity of the approach as well as the breadth of the study. We respond point by point to your concerns below.
1. Range of baselines
We would be happy to cite and discuss these methods in the paper. LOVE [1] is similar in its aims to our method, compressing sequences into reusable parts, but requires observation conditioning, which makes it hard to directly compare to our method. As discussed in Section 4.3, observation conditioning requires in-domain data collection and has tradeoffs with exploration. Parrot [2] is very similar to SSP or OPAL in that it is a flow model over actions, but instead of being over chunks of actions it is only over a single action at a time, so it would be as inefficient to run in practice as SFP, thus we thought SSP would be a better choice. The code is also not available.
2. Presentation requires refinement.
It would be helpful to understand where precisely the issues are with the presentation so that we may amend them.
3. Dataset-specificity of skills
To test the specificity of the dataset required to generate skills we provided experiments in the Hopper task in Appendix E, where we found indeed that quality was not so important, and we also subsampled the data in Figure 6, which suggests that only a very small number of decent demonstrations are necessary. If this is referring to the choice of tasks, we already choose more challenging tasks than prior methods.
4. No formal guarantee of optimality
We agree that there is no formal guarantee of optimality and many skill-learning papers do not provide any such guarantees, including the baselines we consider. We already mention this limitation in lines 131-136, but we would be happy to expand this discussion if it is not sufficient.
5. Computational complexity of the merging process in high-dim
Perhaps the presentation was unclear. In particular our merging process first consists of running k-means on actions and assigning them to their closest cluster centers. Then, we run BPE on the derived "strings". As such, the computational complexity of the merging process in high dimensions will only affect the k-means step, which indeed is more expensive for high-dimensional input, but still very very efficient on existing hardware for 1000-dimensional input spaces. The BPE step is on discrete units, so it will be fast regardless of the input dimension, certainly when compared to training neural networks. We hope this clarifies.
6. Unclear how method performs with larger, more diverse datasets
RL with a large action space is difficult, but so is RL with an unconditional skill space that can perform many different behaviors. As such we don't believe this is any more difficult with our method than existing methods. We believe such studies are out of scope given the current large batch of experiments, but agree it would be interesting to follow up. Certainly our method would be much faster to test on larger datasets when compared to prior work given the speedups.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: We made a concerted effort to address each of the issues that the reviewer raised and, while we respect the reviewer’s opinion, we would appreciate a justification for the decision. In our rebuttal we address all concerns and, with all due respect, we find a number of them to be broad or vague.
As an example, when the reviewer says “the presentation requires refinement” without details as to specific changes desired, it is in direct contradiction to all other reviewers who praised the clarity of the presentation, thus it would be helpful to make specific comments.
In addition, the comment that “computational complexity... may prove prohibitive in high dimensional input domains” is factually inaccurate. The current method scales quite nicely to high-dimensional action spaces as it only relies on K-Means, which has minibatch approximations in high dimensions.
We already make these points in our rebuttal above, along with addressing other concerns. We understand that the reviewer has other commitments, but we would appreciate if they took the time to provide substantive feedback and a justification for their decision. | Summary: The paper titled "Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning" introduces a novel method for skill extraction from demonstrations to address the challenge of exploration in sparse-reward reinforcement learning (RL). Inspired by the Byte-Pair Encoding (BPE) algorithm used in natural language processing, the authors propose a method to generate skills that can be used to accelerate both skill extraction and policy inference in RL tasks. The proposed method demonstrates strong performance in various tasks, providing up to 1000× acceleration in skill extraction and 100× acceleration in policy inference. The method also shows potential for skill transfer across loosely related tasks and generates a finite set of interpretable behaviors.
Strengths: - **Originality**
The paper presents a novel approach by adapting the Byte-Pair Encoding (BPE) algorithm from natural language processing to skill extraction in reinforcement learning. This creative application demonstrates originality.
- **Quality**
The method is evaluated on several challenging sparse-reward RL tasks, such as AntMaze and Kitchen environments. The results indicate significant improvements in performance and efficiency compared to existing methods.
- **Clarity**
The paper is well-organized and clearly explains the methodology, experiments, and results. Visualizations and detailed explanations help in understanding the process and the outcomes of the proposed approach.
- **Significance**
The proposed method addresses a critical problem in RL, namely extracting skills from the demonstrations to solve exploration in sparse-reward environments. By significantly accelerating skill extraction and policy inference, the method has the potential to impact a wide range of applications in RL.
Weaknesses: 1. Stochasticity of the Environment
The impact of the environment stochasticity raises concerns. Since the method treats sequences of low-level actions as high-level actions, the cumulative effect of each low-level action can vary significantly in highly stochastic environments. For instance, if there is a 50% chance of wind blowing the agent off course, the resulting states from executing a single high-level action could vary greatly, potentially harming the algorithm's performance. The current evaluation environments, such as AntMaze and Kitchen, seem to have relatively low stochasticity. To strengthen the paper, it would be beneficial to include evaluation results from highly stochastic or procedurally generated environments, such as DMLAB30.
2. Intuitive Explanation of BPE Advantage
The paper would benefit from more intuitive explanations regarding why skills discovered using Byte-Pair Encoding (BPE) can outperform those discovered through simple k-means clustering. While BPE is known for its hierarchical and incremental construction of subwords in language processing, clarifying how these characteristics translate to improved skill discovery in reinforcement learning would enhance the understanding of the proposed method's advantages.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Effect of Stochasticity on Algorithm Performance
How do you think the stochasticity of the environment affects the algorithm's performance? Given the potential variability in outcomes when low-level actions are aggregated into high-level actions, understanding this impact is crucial.
2. Intuition Behind BPE Benefits
Can you provide some intuition on how BPE can be beneficial for skill discovery in reinforcement learning? Understanding the specific advantages of BPE over simpler methods of forming high-level actions like k-means clustering would help clarify the strengths of the proposed approach.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors acknowledge several limitations of their work:
1. **Resolution Loss Due to Discretization**: The discretization step in the skill extraction process reduces the resolution of the action space, which might not be suitable for tasks requiring fine-grained actions.
2. **Open-Loop Execution**: The method involves open-loop execution of subwords, which can lead to inefficient and unsafe exploration in certain scenarios. This limitation highlights the need for incorporating feedback mechanisms in future iterations of the method.
The paper does discuss these limitations openly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and consideration spent reviewing. We appreciate that you highlight the novelty and significance of the approach, as well as the clarity in presentation and the challenge of the tasks that we choose to evaluate in. We respond to the weaknesses and questions below.
1. Stochasticity of the environment
It is indeed the case that stochastic environments can pose issues, though we would like to clarify how exactly. The skills that we discover are 10 steps long, while the tasks that we set out to accomplish are hundreds of steps long. This means that skills tend to correspond to short sequences (e.g., taking a step or two forward, turning slightly, reaching toward an object), but not completely memorized actions. Thus if the environment layout were to change (as is the case in DMLab30, and in the case of our transfer experiments), it shouldn't pose an issue for our method as long as the policy can learn a generalized map from observation to action (which would be the case if many training environments were given). However, low-level stochastic dynamics would be more problematic and we discuss this in the limitations. We would be happy to expand this discussion. The deterministic environments we test are standard in the literature (including the baselines we use), so we did not consider this to be a deal-breaker. One option to mitigate this issue is to tune a residual policy on top of skills once they are discovered (not dissimilar to the residual correction in Behavior Transformers), though we find this to be out of scope for the current submission.
2. Advantage of BPE over simpler methods
We believe there may be a slight miscommunication happening here, so we apologize for any lack of clarity in the writing. In short, we do not believe it is correct to compare "simple k-means clustering" with BPE for skill discovery as the two have entirely different purposes, and we use the first to seed the second. In particular, we want to discover high-level action *sequences*. Thus in order to use k-means clustering, we will need to define a distance metric over *sequences* of actions, which is tricky to do. One might think to average l2 distance over pairs of actions in different sequences, but if two sequences of scalar actions are shifts of each other, e.g. (a_1, a_2, a_3) = (0, 1, 0) or (1, 0, 1) (i.e., alternating 0 and 1 actions), these would be farther apart than (0, 1, 0) and (0.5, 0.5, 0.5) which are completely different sequences. Thus it becomes difficult to define a "simpler" method operating on sequences. As to why we choose BPE, BPE is the simplest and most common method employed currently in NLP to discover discrete subsequences. It finds the most common subsequences in a greedy fashion, which in demonstration data would correspond to common behaviors useful across many different scenarios. Our insight is to reuse this technique in RL to help in solving the exploration problem, as these reusable behaviors should correspond to reusable skills. As we show in the paper, this method is much cheaper and faster than using sequence models for skill discovery as prior work has done. We hope this addresses any issues in communication and would welcome further questions to help clarify the presentation.
---
Rebuttal Comment 1.1:
Comment: Thank you again for your feedback.
The discussion period ends in a few hours. We would appreciate it if you could share any additional comments/questions you may have in light of our rebuttal above so that we can respond to them.
---
Rebuttal 2:
Comment: Thank you for your responses, and I apologize for the late reply. After reading your rebuttal, I decided to increase the rating from 5 to 6. However, I still believe that exploring a systematic approach to dealing with highly stochastic environments could be an interesting direction.
---
Rebuttal Comment 2.1:
Comment: Thank you for your reply, we appreciate the engagement. We're also happy that our rebuttal was able to change your opinion of the paper for the better. We agree that stochastic environments are an interesting direction, and in particular the idea of learning a residual correction on top of the low-resolution actions is something we may explore in the future. | Rebuttal 1:
Rebuttal: We appreciate the time and care all reviewers have taken in reading our paper and offering feedback, and thank them for their input.
We are particularly happy to see reviewers appreciate the novelty and creativity of the proposed method (bwJh, iKv2), the extreme efficiency of the approach (bwJh, iKv2, g8NK, jRfS), the breadth of the evaluation (iKv2, g8NK, jRfS) and the quality of the presentation (bwJh, g8NK, jRfS).
We address concerns with the current draft in individual rebuttals for each reviewer below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding | Accept (poster) | Summary: This work presents a new multi-modal large language model, named OMG-LLaVA. This model builds on previous work OMG-Seg and combines a LLM (LLaVA-like) into one simple framework. Compared with previous MLLMs, this work unifies lots of image-level, object-level, and pixel-level segmentation and reasoning tasks in one shot.
Strengths: 1, The motivation and goal are interesting and ambitious. The proposed architecture is novel and interesting to me. Although built on a previous universal segmentation network, the authors show a simple but effective way to connect an LLM and a dense visual perception model.
2, The proposed method only uses one image encoder and one image decoder, which is clean and easy to follow. Moreover, the authors propose a new connection module, named the perception prior module, which also works effectively in connecting object queries into the LLM for further processing.
3, Compared to several works, including PixelLM and GLaMM, OMG-LLaVA has more comprehensive functionality and better performance. Moreover, the entire design is simple and elegant.
4, Overall writing is good and is easy to follow.
Weaknesses: Overall, I think this paper is good and elegant, compared with previous combined approach. However, there are several details should be added for the better draft.
1, More detailed designs or ablation should be carried out for perception prior embedding design. The current draft only shows the effectiveness of proposed approach.
2, More detailed discussion on meta-architecture design should be added.
One question is why you fix the image decoder rather than using another decoder with the same architecture and the same pre-trained weights.
Although this operation can increase the parameter cost, I wonder whether jointly sharing one decoder can have mutual effects.
Thus, more detailed experiments on meta-architecture design should be added.
3, No parameter or GFLOPs analysis. For example, the authors claim they only use one image encoder, one decoder, and one LLM. Compared with the previous combined approach (GLaMM), this advantage is not well explored.
Technical Quality: 4
Clarity: 3
Questions for Authors: See the weakness, I would raise the score if all questions are well solved.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Please see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. More detailed designs or ablation should be carried out for perception prior embedding design.**
**A**: Thanks for your suggestion. We have added more detailed ablation studies on the perception prior embedding strategy. We conducted ablation studies on different strategies for assigning perception embeddings to each pixel token, and the results are shown in Table R5. The best results were achieved by the SoftMax method, which assigns the corresponding object query to each pixel as its embedded perception embedding. The strategy of generating perception embeddings through L1 normalization led to performance degradation, but it still brought a significant improvement compared to not using the perception prior embedding strategy.
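For readers curious what a SoftMax-based assignment might look like, the following is a hypothetical NumPy sketch, not the actual OMG-LLaVA implementation: every pixel token receives a convex combination of object queries, weighted by a softmax of that pixel's mask scores over objects. All shapes and names here are our assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of a SoftMax assignment strategy: pixels well inside an
# object's mask are dominated by that object's query embedding.
def perception_prior_embedding(mask_scores, object_queries):
    """mask_scores: (num_objects, num_pixels); object_queries: (num_objects, dim)."""
    e = np.exp(mask_scores - mask_scores.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)  # softmax over objects, per pixel
    return weights.T @ object_queries           # (num_pixels, dim)
```

With sharply peaked mask scores this reduces to hard assignment of each pixel to one object query, while softer scores blend queries near mask boundaries, which may be why the softmax variant outperforms an L1-normalized weighting in the ablation.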
---
**Q2. More detailed discussion on meta-architecture design should be added. One question is why you fix the image decoder rather than using another decoder with the same architecture and the same pre-trained weights. Although this operation can increase the parameter costs, I wonder whether jointly sharing one decoder can have mutual effects. Thus, more detailed experiments on meta-architecture design should be added.**
**A**: Using another decoder with the same architecture, loaded with the same pre-trained weights, to generate segmentation yields only a marginal performance improvement, as shown in the table below. Undoubtedly, using an additional decoder can achieve better performance due to the introduction of more trainable parameters. However, the performance gain is marginal, so we chose to adopt a frozen OMG Decoder to achieve a balance between performance and parameters.
| Decoder | refCOCO Val | refCOCO TestA | refCOCO TestB | refCOCO+ Val | refCOCO+ TestA | refCOCO+ TestB | refCOCOg Val | refCOCOg Test | GCG METEOR | GCG CIDEr | GCG AP50 | GCG mIOU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| unfrozen | 78.0 | 80.3 | 74.1 | 69.1 | 73.1 | 63.0 | 72.9 | 72.9 | 14.5 | 38.5 | 28.6 | 64.7 |
| frozen | 77.2 | 79.8 | 74.1 | 68.7 | 73.0 | 61.6 | 71.7 | 71.9 | 14.5 | 37.5 | 28.9 | 64.6 |
---
**Q3. No parameter and Gflops analysis. For example, the author claim they only use one image encoder, one decoder and one LLM. Compared with previous combined approach (GLaMM), this advantage is not well explored.**
**A**: We summarize the parameters of the main components of OMG-LLaVA, such as the LLM and visual encoder, in the table below. The visual encoder of OMG-LLaVA is only 0.2B, much smaller than the 0.9B used by LISA and GLaMM. The number of vision tokens significantly impacts the computational cost of the LLM. Compared to GLaMM, which uses 576 visual tokens, OMG-LLaVA only requires about 276 visual tokens, roughly 50% of GLaMM's.
| Methods | LLM | Visual Encoder | Image Size | Visual Tokens |
| --------- | ---- | -------------------------- | -------------------------- | ---------------------------------------- |
| LISA | 7B | VIT-L(0.3B) \& VIT-H(0.6B) | (224, 224) \& (1024, 1024) | 256 |
| GLaMM | 7B | VIT-L(0.3B) \& VIT-H(0.6B) | (336, 336) \& (1024, 1024) | 576 |
| OMG-LLaVA | 7B | ConvNext-L(0.2B) | (1024, 1024) | 256(Pixel-centric) \& 20(Object-centric) |
---
Rebuttal 2:
Comment: After going through the feedback from other reviewers and the author's detailed replies, I'm more convinced this paper lives up to the NeurIPS standards. I’m especially impressed by the authors’ thorough and detailed response, with the solid experimental results they added. They addressed the key issues I brought up. So, I've bumped my score up to 8, with a clear accept.
Just to sum up my thoughts on this research: The paper introduces OMG-LLaVA, which is a trailblazer in merging three levels of perception into a single LLM system, using just one encoder, one decoder, and one LLM. The way the authors link object queries from a universal segmentation model to the LLM is really smart. It's not just about beating other models like Pixel-LLM and GLaMM—it’s about doing it more elegantly. During the rebuttal, the authors also showed that OMG-LLaVA performs even better with a stronger LLM, pointing to great potential for future work.
At last, the authors should open-source the entire codebase and models (including the stronger LLMs in the rebuttal) for the community soon.
---
Rebuttal Comment 2.1:
Title: Thanks for the response.
Comment: Thanks for your comments and acknowledgement of our work.
We will release the entire codebase, including training, testing, and demo model.
Best regards!
Authors of 8973 | Summary: This paper proposes OMG-LLaVA, an elegant MLLM framework. OMG-LLaVA achieves pixel-level, object-level, and image-level understanding and reasoning with only a perception model and a language model. OMG-LLaVA exhibits more comprehensive capabilities than current MLLMs, such as LLaVA, Osprey, and GLaMM. The experimental results on several benchmarks demonstrate that OMG-LLaVA achieves performance comparable to SOTA methods.
Strengths: 1 The motivation is very clear and easy to follow.
2 The OMG-LLaVA is a simpler and more efficient MLLM framework compared to current MLLMs, and it has broader capabilities such as universal image segmentation and supporting comprehensive visual prompt inputs.
3 The proposed perception prior embedding strategy is crucial and efficient for providing perception results for LLMs. This approach allows LLMs to achieve grounded outputs similar to selecting object tokens from inputs, which is more reasonable and performs better on segmentation tasks than LISA, PixelLM, and GLaMM.
4 OMG-LLaVA unifies the visual prompt embeddings and segmentation embeddings as object-centric visual tokens, effectively shortening the training phases.
Weaknesses: 1 The paper needs to include more ablation experiments to help readers better understand the work. For example, what would happen if the visual projector design shared an MLP for pixel- and object-centric visual tokens?
2 Would the performance of OMG-LLaVA improve with a larger LLM?
3 The authors should include more MLLMs in Table 1 for a more comprehensive comparison.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1 The paper needs to include more ablation experiments to help readers better understand the work. For example, what would happen if the visual projector design shared an MLP for pixel- and object-centric visual tokens?
2 Would the performance of OMG-LLaVA improve with a larger LLM?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors pointed out the limitations of OMG-LLaVA, including performance degradation on image-level tasks and lack of capability for part-level segmentation. The latter can be addressed by replacing the perception model with a more powerful one.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The paper needs to include more ablation experiments to help readers better understand the work. For example, what would happen if the visual projector design shared an MLP for pixel- and object-centric visual tokens?**
**A:** Table R4 presents the results of ablation studies on the projector. We observed a substantial performance degradation when pixel-centric and object-centric tokens shared a projector. This phenomenon can be attributed to the distinct semantic and perception information encoded in these two types of tokens. Sharing a projector results in detrimental interference between them. Furthermore, our experiments with additional cross-attention layers in the projector did not yield any performance benefits. For the object-centric tokens, specifically visual prompt tokens and object queries, sharing a projector resulted in marginal performance gains for both segmentation and visual prompt understanding tasks.
---
**Q2. Would the performance of OMG-LLaVA improve with a larger LLM?**
**A:** We have conducted experiments with OMG-LLaVA using various LLMs, and the results are presented in Table R3. Our findings indicate that the performance of OMG-LLaVA increases with the strength of the LLM. Due to time limitations, the performance of OMG-LLaVA models based on larger LLMs, such as Yi-34b, will be reported in the coming days.
---
**Q3. The authors should include more MLLMs in Table 1 for a more comprehensive comparison.**
**A:** We appreciate your feedback. To further emphasize the unique capabilities of OMG-LLaVA, we have compared against about 25 additional MLLMs. The results demonstrate that OMG-LLaVA provides the most comprehensive capabilities while employing a single visual encoder.
---
Rebuttal Comment 1.1:
Title: Concerns Addressed
Comment: Thanks for the further explanation. Please include the supplementary experiments in the final version of the submission. I'll keep my positive rating. | Summary: This paper presents OMG-LLaVA, which facilitates pixel-level, object-level and image-level understanding tasks within a unified framework. Within the OMG-LLaVA, an OMG decoder and perception prior embedding approach are proposed to enhance object-centric comprehension. Comparison with state-of-the-art methods prove the effectiveness of the proposed method.
Strengths: 1. This paper integrates several segmentation tasks and object-level visual comprehension ability within a unified framework. Using a high-resolution image encoder and compressing the visual tokens to save computation resources is reasonable.
2. The design of perception prior embedding is interesting which can integrate object-level cues with visual tokens.
Weaknesses: 1. The proposed method is not very elegant as too many designs customized for different tasks are introduced in the framework. For different tasks (e.g., image captioning, image segmentation and visual prompt based comprehension), different workflows should be conducted.
2. Some important technical details are omitted. For example, to encourage different object queries to attend to different objects, hungarian matching algorithm should be applied to assign labels for each individual object query, especially for COCO segmentation where multiple objects are supposed to be simultaneously segmented. Besides, to generate masks from object queries, the mask decoder should be carefully designed to generate high-quality masks, however, this paper only mentioned a simple FFN is utilized. Is it reasonable? More details should be provided.
3. Some important experimental results are not provided. For example, experimental results on prevalent VQA benchmarks like MMBench, MME, SEEDBench, etc. should be provided to illustrate the general-purpose conversational capability of OMG-LLaVA. Does introducing these segmentation data hurt the VQA capabilities of the model? Overall, only segmentation and simple captioning qualitative results are reported, which cover far fewer functions than the proposed OMG-LLaVA claims to support.
Technical Quality: 3
Clarity: 3
Questions for Authors: Since comprehension capabilities ranging from image-level to object-level are established, does OMG-LLaVA possess composite capabilities such as visualizing the intention of the LLM during a conventional conversation in the form of segmentation masks? This would be very helpful for understanding the working mechanism of LLMs.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The proposed method is not very elegant as too many designs customized for different tasks are introduced in the framework.**
**A**: OMG-LLaVA boasts an elegant model architecture and more streamlined workflows across tasks than GLaMM. For instance, in GLaMM, visual prompt encoding and segmentation mask decoding employ entirely different workflows and models. In OMG-LLaVA, by contrast, visual prompt encoding and segmentation mask decoding are unified into a single OMG decoder that generates object-centric queries. Maintaining a consistent workflow while unifying many tasks remains extremely challenging. For example, mask decoding and visual prompt encoding are hard to implement solely with LLMs, without relying on other modules. We believe that in the future, diverse tasks will be unified under elegant architectures and shared workflows, and OMG-LLaVA is a significant step toward this goal.
---
**Q2. Some important technical details are omitted. For example, to encourage different object queries to attend to different objects, a Hungarian matching algorithm should be applied to assign labels for each individual object query, especially for COCO segmentation, where multiple objects are supposed to be simultaneously segmented. Besides, to generate masks from object queries, the mask decoder should be carefully designed to generate high-quality masks, however, this paper only mentioned a simple FFN is utilized. Is it reasonable? More details should be provided.**
**A**: There is a misunderstanding. The segmentation ability is derived from the frozen perception model (OMG-Seg), not the LLM's output. OMG-Seg is a component of OMG-LLaVA, so there is no need for the LLM to reiterate these perception results.
Secondly, the mask decoder of OMG-LLaVA is not a simple FFN layer. The last layer's hidden states of the [SEG] token first pass through a text projector (an MLP layer) and then are fed into the frozen mask decoder (OMG-Seg head) to obtain segmentation masks. During training, the mask decoder remains frozen while the text projector is trained. Since the perception results have already been embedded into the LLM's input via perception prior embedding, generating segmentation queries is not difficult. As shown in Table R4 in our PDF file, we also experimented with adding extra cross-attention layers in the text projector, but the performance was similar to using a simple MLP.
---
**Q3. Some important experimental results are not provided. For example, experimental results on prevalent VQA benchmarks like MMBench, MME, SEEDBench, etc. should be provided to illustrate the general-purpose conversational capability of OMG-LLava.**
**A**: We have evaluated the performance of OMG-LLaVA on multiple image-level benchmarks, including MMbench, MME, SEED-Bench, POPE, and AI2D. The results are presented in Table R2. When jointly co-trained with both image-level and segmentation-based data, OMG-LLaVA significantly outperformed LISA and GLaMM on all image-level benchmarks. However, co-training with segmentation-based data significantly decreases performance on image-level benchmarks, although OMG-LLaVA can mitigate this issue to a large extent through perception prior embedding. We also evaluate the performance of OMG-LLaVA trained solely on the LLaVA dataset on image-level benchmarks. OMG-LLaVA outperformed LLaVA 1.5 on MME, SEED-Bench, POPE, and AI2D by 41, 3.0, 3.0, and 5.1 points, respectively.
---
**Q4. Since comprehension capabilities ranging from image-level to object-level are established, does OMG-LLaVA possess composite capabilities such as visualizing the intention of the LLM during a conventional conversation in the form of segmentation masks? This would be very helpful for understanding the working mechanism of LLMs.**
**A**: As shown in Figure R6, OMG-LLaVA can visualize LLM intentions through segmentation masks. While the ability to visualize LLM intentions in diverse conversations was not explicitly included in OMG-LLaVA's training data, the model learned this capability by training on referring expression segmentation data and image-level conversation data. This ability is crucial for AI assistants, as it significantly enhances the user experience.
---
Rebuttal Comment 1.1:
Title: Reply to authors rebuttal
Comment: **R1: Framework Design**
I disagree with the author's claim that *"mask decoding and visual prompt encoding are hard to implement solely with LLMs without relying on other modules"*, because there are already related works that handle mask decoding and visual prompts using the LLM alone. Specifically, VisionLLM [1] transfers polygon masks into a sequence of points sampled from their contour as LLM's learning target. ViP-LLaVA [2] renders the visual prompts in the image, and feeds the image into the LLM without any visual prompt encoding techniques for training.
[1] Wang W, Chen Z, Chen X, et al. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks[J]. Advances in Neural Information Processing Systems, 2024, 36.
[2] Cai M, Liu H, Mustikovela S K, et al. ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 12914-12923.
**R2: Technical Details**
My major problem is how the set of learnable object queries in the OMG decoder can automatically capture distinct object regions. As a comparison, each object query in DETR should be assigned a unique GT box as supervision. This is crucial for establishing the dense instance perception capability.
Since OMG-Seg, a powerful foundation model that can solve universal segmentation tasks, is adopted as the mask decoder, the expression that *"the object queries can be decoded into segmentation masks and object categories via a simple FFN layer"* is somewhat misleading.
**R3: VQA Results**
Thanks for authors' efforts, I have another follow-up question about the provided VQA results.
1. What makes the OMG-LLaVA perform better than LLaVA-v1.5 using the same training data?
**Follow-up questions:**
Now that the OMG-LLaVA has object-level perception ability, can OMG-LLaVA tackle traditional task like object detection?
---
Reply to Comment 1.1.1:
Comment: Thanks for your efforts and quick response.
> **R1: Framework Design**
We apologize for the previous imprecise statement. We would like to rephrase it as "**Fine-grained** mask decoding and **flexible** visual prompt encoding are challenging to implement solely with LLMs without relying on additional modules."
While we acknowledge that VisionLLM and ViP-LLaVA can respectively achieve mask decoding and visual prompt encoding without external modules, **these approaches have significant limitations**. For instance, VisionLLM decodes N contour points, similar to PolarMask [1], to generate segmentation results, **but contour-based representations inherently suffer from inaccuracies** compared to binary masks for precise object segmentation; in particular, they make it hard to segment objects with holes. OMG-LLaVA adopts the query-based OMG-Seg as its baseline, which avoids this issue. ViP-LLaVA encodes visual prompts by drawing them onto the image, **requiring all visual prompts to be provided before the dialogue**, which limits the flexibility to add new prompts during the conversation. OMG-LLaVA, on the other hand, leverages the OMG decoder to produce high-quality binary segmentation masks and to flexibly handle user-input visual prompts during the conversation.
[1] Xie E, Sun P, Song X, et al. Polarmask: Single shot instance segmentation with polar representation. CVPR 2020.
***
> **R2: Technical Details**
OMG-LLaVA employs a pre-trained OMG-Seg model as its visual encoder. Similar to DETR, OMG-Seg utilizes learnable queries and has been pre-trained on the COCO dataset. The training of OMG-Seg undoubtedly requires the Hungarian matching algorithm to assign labels akin to DETR. However, in the training of OMG-LLaVA, OMG-Seg is kept frozen, eliminating the need for training and Hungarian matching. Instead, it performs the inference process of panoptic segmentation. The dense instance perception capability is derived from the pre-trained OMG-Seg.
The statement "the object queries can be decoded into segmentation masks and object categories via a simple FFN layer" is a detailed description of the OMG-Seg model and may lead to some misunderstanding. In OMG-LLaVA, the hidden states of the [SEG] token are transformed into initial object queries through an MLP projector. Subsequently, these initial queries are fed into the OMG decoder, where they interact with image features via self- and cross-attention layers to produce the final object queries. Within the OMG decoder, the final object queries are passed through a simple FFN layer to obtain segmentation masks and object categories. We will rephrase this part in the next draft.
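The Hungarian matching mentioned here (assigning each ground-truth object to a distinct query so that the total matching cost is minimal, with unmatched queries supervised as "no object") can be illustrated with a tiny brute-force sketch. This is our own toy example with a made-up cost matrix, not the OMG-Seg training code:

```python
from itertools import permutations

def match_queries_to_targets(cost):
    """Brute-force Hungarian-style matching for tiny problems.

    cost[i][j] is the cost of assigning query i to ground-truth object j
    (in practice a combination of class and mask/box costs). Returns
    (query_index, target_index) pairs minimizing the total cost.
    """
    n_q, n_t = len(cost), len(cost[0])
    best, best_pairs = float("inf"), None
    for perm in permutations(range(n_q), n_t):  # perm[j] = query chosen for GT j
        total = sum(cost[perm[j]][j] for j in range(n_t))
        if total < best:
            best, best_pairs = total, [(perm[j], j) for j in range(n_t)]
    return best_pairs

# 4 learnable object queries, 2 ground-truth objects: only the
# best-matching queries receive GT labels; the rest get "no object".
cost = [
    [0.9, 0.1],  # query 0 matches GT 1 well
    [0.2, 0.8],  # query 1 matches GT 0 well
    [0.7, 0.6],
    [0.5, 0.9],
]
print(sorted(match_queries_to_targets(cost)))  # [(0, 1), (1, 0)]
```

In real DETR/OMG-Seg training this is solved with an efficient Hungarian algorithm rather than enumeration, but the supervision principle is the same.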
***
> **R3: VQA Results**
When trained exclusively on the LLaVA dataset, the primary distinction between OMG-LLaVA and LLaVA 1.5 lies in the perception prior embedding. Despite employing the same LLM, an identical MLP projector, and similar CLIP backbones (ConvNext-L for OMG-LLaVA and ViT-L for LLaVA 1.5), the incorporation of perception results as input to the LLM enables a more profound understanding of images. Analogous findings have been reported in Vcoder [2] and SeTok [3].
[2] Jain J, Yang J, Shi H. Vcoder: Versatile vision encoders for multimodal large language models. CVPR 2024.
[3] Wu S, Fei H, Li X, et al. Towards Semantic Equivalence of Tokenization in Multimodal LLM. ArXiv, 2024.
***
> R4: Now that the OMG-LLaVA has object-level perception ability, can OMG-LLaVA tackle traditional task like object detection?
OMG-Seg is capable of handling both instance segmentation and panoptic segmentation, and bounding boxes can be easily extracted from the generated binary masks. As OMG-LLaVA incorporates a pre-trained OMG-Seg, it inherently possesses all of OMG-Seg's capabilities, such as object detection. OMG-LLaVA achieves 44.5 mAP on the COCO instance segmentation benchmark and 45.8 mAP on the COCO object detection benchmark.
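The box-from-mask extraction described above is indeed straightforward; a minimal illustrative sketch (our own, not the authors' code):

```python
import numpy as np

def mask_to_box(mask):
    """Binary mask (H, W) -> tight bounding box (x0, y0, x1, y1), inclusive."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None  # empty mask: no detectable object
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

mask = np.zeros((6, 6), dtype=bool)
mask[1:4, 2:5] = True  # object occupies rows 1..3, cols 2..4
print(mask_to_box(mask))  # (2, 1, 4, 3)
```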
In OMG-LLaVA, current dense segmentation tasks such as panoptic segmentation rely solely on OMG-Seg and do not utilize the LLM. We think that training the LLM to reproduce the perception results already output by OMG-Seg is unnecessary and meaningless, even though it could be achieved through simple training.
---
Reply to Comment 1.1.2:
Title: Please let us know whether we address all the issues
Comment: Dear reviewer,
Has our response addressed your questions? If you have any other questions or feel that some issues have not been adequately answered, please let us know. If not, we kindly ask you to consider raising your score.
Thank you | Summary: This work proposes, OMG-LLaVA, a unified framework for image-level, object-level, and pixel-level vision-language comprehension. In particular, OMG-Seg, a universal image segmentation model, is integrated with a LLaVA-like multimodal large language model (MLLM), so that various image-level (e.g., image captioning, visual question answering), object-level (e.g., promptable segmentation, region captioning), and pixel-level (e.g., referring expression segmentation, grounded conversation generation) vision-language tasks can be performed. By using one unified visual encoding and decoding module for objects and pixel masks, OMG-LLaVA poses a simple design and shows competitive performance compared with prior models.
Strengths: 1. Overall, OMG-LLaVA’s architecture design is more unified and concise when compared to prior models like LISA and GLaMM where more than one visual encoder is applied.
2. OMG-LLaVA is able to perform a wide range of vision-language comprehension tasks with one model.
3. The method description is generally clear and easy to follow.
Weaknesses: 1. [Performance] Although this work claims OMG-LLaVA as a generalist model, it cannot outperform prior state-of-the-art models on any specific task. This performance difference would weaken the applicability. In Table 3, the performance should also be compared with GLaMM (which has significantly better performance than LISA).
2. [Technical contribution] Overall, OMG-LLaVA seems a direct integration of OMG-Seg and LLaVA. The most novel design in this work seems to be the perception prior embedding, which enables the connection between OMG-Seg and the object visual tokens in LLaVA. The technical contribution is thus somewhat limited. It would be great if the authors could better discuss the novel technical contributions of this work.
3. [Image-level tasks] Although this work claims OMG-LLaVA can unify image-level, object-level, and pixel-level reasoning and understanding tasks, there is no quantitative evaluation in terms of image-level tasks such as VQA.
4. [Frozen perception module] In OMG-LLaVA, the “universal perception module” (OMG-Seg) is frozen (Figure 2e) to preserve its learned knowledge. However, Table 3 shows that allowing the OMG decoder to be tuned can actually improve the referring segmentation performance. It is not explained why this module should be frozen in other tasks, or whether this practice would bring further gains.
Technical Quality: 2
Clarity: 2
Questions for Authors: In addition to the weaknesses mentioned above, there are two minor questions:
1. What is the “pixel shuffle operator” (Line 171)?
2. To be consistent with Equation 2, should the segmentation mask be written as $\mathcal{M}\in\mathbb{R}^{HW\times N_q}$ instead of $\mathcal{M}\in\mathbb{R}^{N_q\times HW}$?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have discussed the technical limitations, but have not discussed potential societal impacts. There is no explanation about why this work has no societal impacts, either.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. [Performance] Although this work claims OMG-LLaVA as a generalist model, it cannot outperform prior state-of-the-art models on any specific task.**
**A**: We have updated the performance of OMG-LLaVA, as shown in Table R1. OMG-LLaVA outperforms LISA, PixelLM, and GSVA on the RES benchmarks. GLaMM used the GranD dataset for pretraining. The GranD pretrain dataset contains 11M images, far exceeding the GranD-f dataset used by OMG-LLaVA, which only includes 210K (0.21M) images. Nevertheless, OMG-LLaVA still surpasses GLaMM on the GCG benchmark by 0.6 CIDEr, 1.4 AP50, and 0.1 mIoU.
Additionally, OMG-LLaVA achieves substantial performance gains over GLaMM, LISA, and PixelLM on image-level benchmarks (Table R2). For instance, when jointly co-training with image-level and segmentation-based datasets, OMG-LLaVA achieves 1412 on the MME benchmark, outperforming LISA (1410), PixelLM (968), and GLaMM (1389). When trained only on the LLaVA dataset, OMG-LLaVA outperforms LLaVA-1.5 by margins of 41, 3.0, 3.0, and 5.1 points on MME, SEED-Bench, POPE, and AI2D, respectively.
---
**Q2. [Technical contribution] Overall, OMG-LLaVA seems a direct integration of OMG-Seg and LLaVA.**
**A**: Our most significant contribution is introducing an elegant MLLM architecture that comprises only a perception model and an LLM, capable of understanding and reasoning at the image, object, and pixel levels. The frozen perception decoder can simultaneously encode visual prompts and decode [SEG] tokens. However, our experiments and those in LISA have shown that the frozen perception decoder cannot effectively decode [SEG] tokens. The proposed perception prior embedding is the key component enabling our meta-architecture to work well. Thus, we have three main contributions.
- First, we propose an elegant MLLM architecture with a single visual encoder, a perception head, and an LLM, which can simultaneously handle image, object, and pixel-level tasks.
- Secondly, we introduce perception prior embedding to help the LLM better align pixel-centric visual tokens with object-centric visual tokens, allowing the LLM to generate object-centric tokens that the frozen perception decoder can decode based on the input embedded perception prior.
- Thirdly, we construct OMG-LLaVA, which achieves SOTA performance on pixel-level tasks and comparable accuracy to SOTA on object-level tasks. Additionally, OMG-LLaVA significantly outperforms versatile MLLMs like LISA, PixelLM, and GLaMM in image-level tasks.
From these aspects, we argue that our work is not the direct integration of OMG-Seg and LLaVA.
---
**Q3. [Image-level tasks] Although this work claims OMG-LLaVA can unify image-level, object-level, and pixel-level reasoning and understanding tasks, there is no quantitative evaluation in terms of image-level tasks such as VQA.**
**A**: Table R2 presents our model's performance on various image-level benchmarks, including MME, MMBench, SEED-Bench, POPE, and AI2D. Jointly co-training on segmentation and image-level data can detrimentally impact an MLLM's performance on image-level benchmarks. However, OMG-LLaVA mitigates this negative impact thanks to its perception prior embedding. OMG-LLaVA significantly outperforms GLaMM, PixelLM, LISA, and LaSagnA on these image-level benchmarks. OMG-LLaVA scores 1412 on MME and 47.7 on MMBench, surpassing GLaMM, which scores 1389 on MME, by 11.1 points on MMBench. By training solely on image-level data, OMG-LLaVA achieves superior performance over LLaVA 1.5 on benchmarks such as MME, SEED-Bench, POPE, and AI2D, attributed to its perception prior embedding.
---
**Q4. [Frozen perception module] In OMG-LLaVA, the “universal perception module” (OMG-Seg) is frozen (Figure 2e) to preserve its learned knowledge. However, Table 3 shows that allowing the OMG decoder to be tuned can actually improve the referring segmentation performance. It is not explained why this module should be frozen in other tasks, or whether this practice would bring further gains.**
**A**: During finetuning, the OMG decoder is duplicated and unfrozen to better decode the [SEG] tokens generated by the LLM. The encoding of object-centric tokens and visual prompts continues to be handled by the original frozen OMG decoder. We also add the experiment of keeping the OMG decoder frozen when finetuning the model. As shown in the last two lines of Table R1, finetuning on the RES datasets while keeping the OMG decoder frozen results in a slight performance degradation compared to the case where an additional OMG decoder is duplicated and unfrozen (78.0 mIoU *vs.* 77.2 mIoU). However, the performance still significantly surpasses LISA and PixelLM. Moreover, we directly unfreeze the original OMG decoder without duplicating it. In that case, OMG-LLaVA cannot work well due to the reliance of object-centric tokens on a stable mask segmentation capability.
---
**Q5. What is the “pixel shuffle operator” (Line 171)?**
**A**: This paper leverages the pixel shuffle operator to downsample image features. By flattening $S \times S$ neighboring pixel features, the operator reduces the image size from $(H, W, C)$ to $(H/S, W/S, C \times S^{2})$.
---
**Q6. To be consistent with Equation 2.**
**A**: Thank you for the suggestion. We have modified $M \in R^{N_{q}\times HW}$ to $M \in R^{HW \times N_{q}}$.
---
Rebuttal Comment 1.1:
Title: Please let us know whether we address all the issues
Comment: Dear reviewer,
Thank you for the comments on our paper.
We have submitted the response to your comments and a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score after we address all the issues.
Thank you | Rebuttal 1:
Rebuttal: # General Responses
---
Dear Reviewers,
We thank all the reviewers for the detailed suggestions. All reviewers acknowledge the technical contributions of our work, including the new unified designs for MLLM and comprehensive benchmark evaluation. We listed additional important experiments and answered common questions here. We will respond individually to the corresponding reviewers for the detailed questions of each reviewer.
---
## 1. Evaluation on image-level benchmarks (R#czBJ and R#ck7P)
We have evaluated the performance of OMG-LLaVA on multiple image-level benchmarks, including MMbench, MME, SEED-Bench, POPE, and AI2D. The results are presented in Table R2 (please refer to the submitted pdf). When jointly co-trained with both image-level and segmentation-based data, OMG-LLaVA significantly outperformed LISA, GLaMM, PixelLM, and LaSagnA on all image-level benchmarks. For example, OMG-LLaVA scores 1257 on MME and 45.7 on MMBench, surpassing GLaMM, which scores 1234 on MME, by 8.9 points on MMBench. We list these results in the pdf file.
However, co-training with segmentation-based data significantly decreases performance on image-level benchmarks (This is also verified in previous work, PixelLLM (https://arxiv.org/pdf/2312.02228), see the last page of their arxiv.), although OMG-LLaVA can mitigate this issue to a large extent through perception prior embedding. We also evaluate the performance of OMG-LLaVA trained solely on the LLaVA-1.5 dataset on image-level benchmarks. When using the same LLM, OMG-LLaVA outperformed LLaVA 1.5 on MME, SEED-Bench, POPE, and AI2D by 41, 3.0, 3.0, and 5.1 points, respectively.
---
## 2. Performance of OMG-LLaVA (R#czBJ and R#Timw)
We have fixed some bugs and updated the performance of OMG-LLaVA, as shown in Table R1 (please refer to the submitted pdf). OMG-LLaVA achieves 78.0, 69.1 and 72.9 mIoU on refCOCO, refCOCO+ and refCOCOg benchmarks, outperforming LISA, PixelLM, and GSVA on the RES benchmarks. GLaMM uses the GranD pre-train dataset for pretraining. The GranD pretrain dataset contains over 11M images, **far exceeding** the GranD-f dataset used by OMG-LLaVA, which **only** includes 210K (0.21M) images. Nevertheless, OMG-LLaVA still surpasses GLaMM on the GCG benchmark by 0.6 CIDEr, 1.4 AP50, and 0.1 mIoU.
We have conducted experiments with OMG-LLaVA using stronger LLMs, such as Phi3 and Qwen2, and the results are presented in Table R3. With the stronger LLM Qwen2-7B, OMG-LLaVA performs better on pixel- and image-level benchmarks.
In addition, we will consider adding more segmentation and caption datasets for co-training, which will serve as our future work.
---
## 3. The contributions of OMG-LLaVA (R#czBJ and R#ck7P)
Our most significant contribution is introducing an elegant MLLM architecture that comprises only a perception model and an LLM, capable of understanding and reasoning at the image, object, and pixel level. This is a systematic contribution. To the best of our knowledge, we are the first open-source system to fulfill that goal in the MLLM domain using one image encoder, one image decoder, and one LLM. The frozen perception decoder can simultaneously encode visual prompts and decode [SEG] tokens. However, our experiments and those in LISA have shown that the frozen perception decoder cannot effectively decode [SEG] tokens. To better adapt the segmentation model to LLM, we propose the perception prior embedding module, which is the key component enabling our meta-architecture to work well.
In addition, compared with existing methods, OMG-LLaVA boasts an elegant model architecture and more streamlined workflows. For instance, in GLaMM, visual prompt encoding and segmentation mask decoding employ entirely different workflows and models. In OMG-LLaVA, visual prompt encoding and segmentation mask decoding are unified into a single OMG decoder to generate object-centric queries. Maintaining a consistent workflow while unifying all tasks is extremely challenging; for example, mask decoding and visual prompt encoding are hard to implement solely with LLMs, without relying on other modules. OMG-LLaVA significantly outperforms versatile MLLMs like LISA, PixelLM, and GLaMM on image-level tasks.
---
## 4. More ablation studies (R#Timw and R#UauW)
Following the reviewer's suggestion, we add additional ablation studies for better clarity. We have included ablation studies on the projector and perception prior embedding. The results are shown in Tables R4 and R5.
Table R4 presents the results of ablation studies on the projector. We observed a substantial performance degradation when pixel-centric and object-centric tokens shared a projector. This phenomenon can be attributed to the distinct semantic and perception information encoded in these two types of tokens. Sharing a projector results in detrimental interference between them. Furthermore, our experiments with additional cross-attention layers in the projector did not yield any performance benefits. For the object-centric tokens, specifically visual prompt tokens and object queries, sharing a projector resulted in marginal performance gains for both segmentation and visual prompt understanding tasks.
We have conducted ablation studies on different strategies for assigning perception embeddings to each pixel token, with results shown in Table R5. The best results are achieved by assigning the corresponding object query to each pixel using the SoftMax operator. The strategy of generating perception embeddings through L1 normalization causes some performance degradation, but it still brings a significant improvement compared to not using the perception prior embedding strategy at all.
---
We will fix all the possible issues and improve the manuscript. To address all your concerns and questions, we prepared a comprehensive response, including additional experiments where necessary. If you have any further questions, please don't hesitate to interact with us.
Best regards
Pdf: /pdf/7b620da83ece6aeff607b56845af40adc647d087.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Learning to be Smooth: An End-to-End Differentiable Particle Smoother | Accept (poster) | Summary: This paper proposes an end-to-end differentiable particle smoother and presents its application to global localization of a subject moving through real-world city-scale environments. Although there exists prior work in differentiable particle *filtering*, such as Mixture Density Particle Filters (MDPF), *smoothing* yields a more accurate estimate for offline data processing as it incorporates all the observations taken along the full trajectory. This paper revisits the classic formulation of Two Filter Smoothing (TFS) and leverages its principle to fuse two MDPFs, one running forward in time and one backward, into the novel Mixture Density Particle Smoother (MDPS). MDPS achieves higher log-likelihood and better tracking accuracy compared to MDPF and other prior work.
Strengths: * To my knowledge, this paper proposes the first framework for end-to-end differentiable particle smoothing. Smoothing is a powerful tool for offline state estimation and expected to achieve better accuracy than filtering. The proposed MDPS is a valuable framework for offline state estimation that does not require hand-designed state transition or observation models.
* The technical approach is a sound combination of MDPF and TFS. Although I have some remaining questions about the underlying math (which is discussed below), the authors seem to have successfully designed their differentiable smoothing framework by leveraging the principled TFS formulation.
* Empirical results show a clear advantage of the smoothing framework compared to its filtering counterpart as well as other prior work based on non-probabilistic schemes (e.g. refinement).
Weaknesses: [EDIT] All major concerns have been resolved after the rebuttal.
* Some of the math presented in Section 3 for TFS seems to be incorrect. Specifically, I do not think equation (14) holds; it seems to be missing $p(y_t \mid x_t)$ from its numerator. Similarly, the resulting equation (15) for the particle weights does not make sense either; multiplying a weight by a weighted sum of estimated states would not result in a weight. It would instead result in a new state. Please check (14) and (15) again and correct any errors. Also, please clarify how (14) and (15) relate to the practical implementation of MDPS presented in Section 4. In the paper's current form, (14) and (15) seem unnecessary for MDPS because we instead have the correct equation (16) to begin with.
* Many figures should be updated for better readability when printed on paper. Figures 1, 4, 7 - 11 have map patches and RGB observations that are too small. I suggest that the authors increase the font size as well and make each component in the figures bigger. The same goes for Figures 3 and 6, where the legends are too small to associate different curves with different methods.
* More details on MDPF would be appreciated at the end of Section 2 for clarity. Specifically, I did not fully follow what the “adaptive” variant means and why two bandwidths were necessary.
Technical Quality: 3
Clarity: 2
Questions for Authors: [EDIT] All major concerns have been resolved after the rebuttal.
* For MDPS, did you ever try initializing the estimates from some random states rather than the noised version of the true states? I wonder whether smoothing can actually correct erroneous initial estimates. If true, that is another advantage compared to MDPF, since we cannot hope for filtering to fix errors in initial estimates at all.
* MDPS requires more GPU runtime to train than MDPF by design, as presented in Tables 1, 2, and 3. Would MDPF still perform worse than MDPS even when you let MDPF train longer to match the GPU runtime of MDPS? In other words, would the loss and the accuracy of MDPF not improve further with additional training epochs?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: * As a fundamental limitation of the proposed approach, training requires three separate stages and thus takes more GPU runtime. I would appreciate it if the authors could comment on the tradeoff between better performance and more demand for compute when compared to MDPF.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive comments and constructive feedback.
Please see our global response to all reviewers for discussion of training time and paper figure formatting.
We agree that adding some discussion of accuracy vs compute demands of MDPF and MDPS would be useful, and will do this. In general MDPS will have higher compute demands than MDPF due to running two particle filters, but both methods scale similarly with the number of particles, and our experiments show that the constant-factor cost increase of MDPS may substantially boost accuracy. Note that appendix B discusses accuracy vs. compute demands for MDPS with respect to the number of particles used.
We would like to thank you for taking the time to carefully review our math. Indeed there are factors missing from equations (14) and (15), below are the corrected versions (with the corrections highlighted in red):
**Equation (14):**
$p(x_t|y_{1:T}) \propto \frac{\tilde{p}(x_t| y_{t+1:T}) \color{red} p(y_t|x_t) \color{black} \int p(x_t|x_{t-1}) p(x_{t-1} | y_{1:t-1}) dx_{t-1}} {\gamma_t(x_t)}$
**Equation (15):**
$\overleftrightarrow{w}_{t}^{(i)} \propto \overleftarrow{w}\_{t}^{(i)} \sum\_{j=1}^N \color{red} \overrightarrow{w}\_{t-1}^{(j)} \color{black} \frac{p(\overleftarrow{x}\_{t}^{(i)} | \overrightarrow{x}\_{t-1}^{(j)})}{\gamma_t(\overleftarrow{x}\_{t}^{(i)})}$
Equations (14) and (15) are included in the paper to contrast our MDPS formulation with the two-filter formulation employed by most prior work. You are correct that they are not directly used in the derivation of MDPS in Sec. 4, and thus these missing terms do not impact the correctness of our results.
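For concreteness, the weight combination in the corrected Eq. (15) can be sketched numerically. This is an illustrative sketch only, assuming 1-D particles, a hypothetical Gaussian dynamics model with noise scale `sigma_dyn`, and a uniform artificial prior $\gamma_t$; it is not the actual MDPS implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss(x, mu, sigma):
    """Gaussian density, used as a stand-in dynamics model p(x_t | x_{t-1})."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Forward-filter particles/weights at t-1 and backward-filter ones at t.
xf = rng.normal(0.0, 1.0, size=50); wf = np.full(50, 1 / 50)
xb = rng.normal(0.5, 1.0, size=50); wb = np.full(50, 1 / 50)
sigma_dyn = 0.3                  # hypothetical dynamics noise scale
gamma = np.ones_like(xb)         # uniform artificial prior gamma_t(x)

# Corrected Eq. (15): backward weight times a forward-weighted mixture of
# transition densities, divided by the artificial prior.
trans = gauss(xb[:, None], xf[None, :], sigma_dyn)   # trans[i, j] = p(xb_i | xf_j)
w_smooth = wb * (trans @ wf) / gamma
w_smooth /= w_smooth.sum()                           # normalize smoothed weights
```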
We agree that the details on MDPF in Sec. 2 are brief, and that expanding this section could be beneficial. In short, the adaptive MDPF as described in [14] is simply a particle filter that decouples the distribution used for resampling and the distribution used as the posterior estimate. The rationale is that the distribution used for resampling particles may not necessarily be appropriate for estimating the state posterior, since the latter may concentrate probability mass more tightly when observations are informative. Adaptive MDPF decouples the resampling and posterior bandwidths and learns each separately, allowing the resampling and posterior distributions to be separately tuned for each application.
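The decoupling described above can be illustrated with a minimal 1-D sketch. All names (`kde_resample`, `beta_resample`, `beta_posterior`) and the bandwidth values here are hypothetical, not taken from [14]; in the actual adaptive MDPF both bandwidths are learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_resample(particles, weights, bandwidth, n_out, rng):
    """Draw i.i.d. samples from a 1-D Gaussian KDE mixture: pick a component
    by weight, then add continuous kernel noise."""
    idx = rng.choice(len(particles), size=n_out, p=weights)
    return particles[idx] + rng.normal(0.0, bandwidth, size=n_out)

def kde_logpdf(x, particles, weights, bandwidth):
    """Evaluate the log of the KDE mixture density at points x."""
    d = (x[:, None] - particles[None, :]) / bandwidth
    kern = np.exp(-0.5 * d ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return np.log(kern @ weights)

particles = rng.normal(0.0, 1.0, size=200)
weights = np.full(200, 1 / 200)

# Decoupled bandwidths: a broader one for resampling (particle diversity),
# a narrower one for the posterior estimate (tighter density when
# observations are informative).
beta_resample, beta_posterior = 0.5, 0.1
new_particles = kde_resample(particles, weights, beta_resample, 200, rng)
posterior_logpdf = kde_logpdf(np.array([0.0]), particles, weights, beta_posterior)
```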
**To answer your specific questions:**
- With regard to MDPS initialization, in the Mapillary and Kitti experiments we initialize MDPS with particles within approximately 150 meters of the true state with the localization area only being 614x614 meters (i.e. particles are drawn randomly from about ~19% of the localization area). This initialization results in erroneous initial estimates for MDPF but interestingly we find that MDPS is able to correct these erroneous initial estimates by using information from the backward filter. Given this, we believe that MDPS could correct initial states when initialized with totally random particles.
- Please see our global response to all reviewers for discussion of training time.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I greatly appreciate the authors for taking the time and effort to prepare the rebuttal. The initial response by the authors has largely addressed my concerns, and I will increase my score accordingly. Below please find my response.
* > We agree that the details on MDPF in Sec. 2 are brief, and that expanding this section could be beneficial. In short, the adaptive MDPF as described in [14] is simply a particle filter that decouples the distribution used for resampling and the distribution used as the posterior estimate. The rationale is that the distribution used for resampling particles may not necessarily be appropriate for estimating the state posterior, since the latter may concentrate probability mass more tightly when observations are informative. Adaptive MDPF decouples the resampling and posterior bandwidths and learns each separately, allowing the resampling and posterior distributions to be separately tuned for each application.
Thank you for clarifying. Please include this piece of information into the paper if the space permits.
* > With regard to MDPS initialization, in the Mapillary and Kitti experiments we initialize MDPS with particles within approximately 150 meters of the true state with the localization area only being 614x614 meters (i.e. particles are drawn randomly from about ~19% of the localization area). This initialization results in erroneous initial estimates for MDPF but interestingly we find that MDPS is able to correct these erroneous initial estimates by using information from the backward filter. Given this, we believe that MDPS could correct initial states when initialized with totally random particles.
This seems to be a reasonable performance improvement to expect over any filtering-based approach. I highly recommend that the authors highlight this point in the paper so the fundamental advantage of smoothing is made even clearer. | Summary: My understanding is that this work seeks to learn a state-space model by differentiating through a regularised version of the sequential Monte Carlo (SMC) approximation of the (generalised) two-filter smoother.
Strengths: I am not very familiar with the applications used as numerical examples but they seem to be reasonably close to real-world scenarios.
Weaknesses: **Lack of clarity & rigour**
I do not believe that the presentation of this work is adequate for NeurIPS. The manuscript is much too informal and therefore very difficult to follow and verify. It reads like a mid-1990s engineering conference paper. For example,
* The work should really include some pseudo-code. Without this, it is very hard to even understand what the proposed algorithm exactly is. It is not even entirely clear to me whether the proposed method seeks to learn only the measurement model or also the system equation. The short paragraph on "Training details" at the end of Section 4 is also highly insufficient.
* Throughout Section 3 and 4, the same symbols seem to be used for model densities and their (e.g. Monte Carlo or other) approximations
* Lines 70/71 claim: "Our differentiable PS incorporate stratified resampling to reduce variance with negligible computational overhead, making training more robustly avoid local optima". I don't find evidence for this and it is not clear to me how the authors would achieve differentiable stratified resampling.
* Some notation is not or insufficiently clearly defined, e.g. $\hat{w}^{(i)}$ and its gradient in Equation 7.
* Line 108 mentions "unbiased" gradient estimates but I cannot see how this would be true for any of the proposed methodology.
**Missing baselines**
There are well-known SMC based, recursive parameter-estimation methods against which the method should really be compared, e.g.:
Kantas, N., Doucet, A., Singh, S. S., Maciejowski, J., & Chopin, N. (2015). On particle methods for parameter estimation in state-space models.
**Insufficient motivation**
It is not clear to me what the motivation for the proposed approach is, except that no one has previously tried to differentiate through the two-filter smoothing approximation.
Technical Quality: 1
Clarity: 1
Questions for Authors: Can you explain how the proposed method achieves differentiable stratified resampling?
Confidence: 3
Soundness: 1
Presentation: 1
Contribution: 1
Limitations: I appreciate the authors' candour that the proposed method will suffer from a curse of dimension.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and feedback.
**To address your concerns:**
- Our MDPS method learns both the measurement model and the dynamics model via end-to-end training. The “Training Details” paragraph at the end of section 4 was intended to highlight the 3-stage training approach. Additional training details for reproducibility can be found in the appendix.
- We apologize that aspects of the notation were confusing; we will add more clarifications. With regard to $\hat{w}^{(i)}$ in Eq. (7), this refers to the unnormalized weight assigned to the resampled particle $\hat{x}^{(i)}$. It has weight 1 for the current model parameters, but weights must be adjusted as dynamics and observation parameters change during training, as captured by the gradient equation.
- Line 108 mentions how unbiased gradient estimates may be produced via the Importance Weighted Sample Gradients (IWSG) estimator proposed in [14]. IWSG computes gradients based on importance sampling [21]. Briefly, importance sampling itself produces unbiased weighted samples from some distribution of interest, and therefore differentiating through importance sampling yields unbiased gradients as well. Experiments in [14] show that this approach has dramatically reduced variance compared to reparameterization-based gradient estimators.
- Thank you for pointing out the review of classic work by Kantas et al. (2015), we will add citation and discussion. These parameter estimation techniques make different assumptions than our work: they learn generative rather than discriminative particle filters, and assume that parametric dynamics and likelihood models with only a few unknown parameters are available. (Experiments in Kantas have only 3 unknown parameters, and we are not aware of any classic work with more than a few dozen learned parameters.) These methods would not practically scale to datasets with large numbers of sequences as we consider, because particle state estimates must be produced for every time step at every iteration. Moreover, the neural network parameterizations of our dynamics and measurement models have over 500 million parameters, millions of times larger than work cited by Kantas. The computational cost of classic particle learning algorithms scales poorly with the number of parameters, and it would be practically impossible to apply these methods to our models (without substantial innovations). Indeed, these limitations motivated our research on the MDPS method.
**To answer your specific questions:**
- Stratified resampling [31,32] is a method of drawing i.i.d. samples from some target distribution while reducing the variance associated with those samples. These samples can be thought of as uniformly weighted samples from the target distribution (i.e. where all the weights are equal). To compute gradients for these samples we use Importance Weighted Sample Gradients (IWSG) [14] which can be applied to any sampling method that produces i.i.d. samples. IWSG propagates gradients via adjusting the weights of the drawn samples (which are initially uniform), yielding a differentiable stratified resampling technique that has the lower sampling variance properties of stratified resampling and the unbiased gradients of IWSG.
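A minimal sketch of the two ingredients, under simplifying assumptions (1-D Gaussian target, hypothetical helper names): stratified uniforms replace i.i.d. uniforms when selecting components, and gradients are carried by importance weights, which equal 1 at the current parameters and re-weight the fixed samples as the parameters change.

```python
import numpy as np

rng = np.random.default_rng(1)

def stratified_uniforms(n, rng):
    """One uniform per stratum [i/n, (i+1)/n): same marginals as i.i.d.
    uniforms, but lower variance when used to select mixture components."""
    return (np.arange(n) + rng.random(n)) / n

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

# Stratified uniforms, e.g. for allocating particles to KDE components.
u = stratified_uniforms(8, rng)

# IWSG idea in miniature: samples are drawn at the current parameter theta0...
theta0, theta, n = 0.0, 0.3, 200_000
x = rng.normal(theta0, 1.0, size=n)

# ...and carry importance weights that equal 1 at theta == theta0 and
# re-weight the fixed samples as theta changes; gradients flow through
# these weights, not through the samples themselves.
w = np.exp(gaussian_logpdf(x, theta, 1.0) - gaussian_logpdf(x, theta0, 1.0))
est_mean = (w * x).sum() / w.sum()   # self-normalized estimate of E_theta[x]
```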
**Additional Reference:**
Kantas, N., Doucet, A., Singh, S. S., Maciejowski, J., & Chopin, N. (2015). On particle methods for parameter estimation in state-space models.
---
Rebuttal 2:
Title: Proof of unbiasedness
Comment: > Line 108 mentions how unbiased gradient estimates may be produced via the Importance Weighted Sample Gradients (IWSG) estimator proposed in [14]. IWSG computes gradients based on importance sampling [21]. Briefly, importance sampling itself produces unbiased weighted samples from some distribution of interest, and therefore differentiating through importance sampling yields unbiased gradients as well. Experiments in [14] show that this approach has dramatically reduced variance compared to reparameterization-based gradient estimators.
Thank you. I have now read [14]. It seems to me that the IWSG from [14] approximates the particle approximation of each filter by a kernel density estimate which is indeed differentiable.
However, due to the introduction of this extra layer of (kernel-density) approximation, I don't see how the IWSG method from [14] would give unbiased gradients of the **original** particle filter. It might give unbiased gradients of some (likely biased) approximation. But **having unbiased gradients for an approximation of a particle filter is not the same as having unbiased gradients for the particle filter itself.**
Unfortunately, like the present manuscript, [14] is written in a highly informal manner and does not seem to provide any proof of the claimed unbiasedness. I would be willing to raise my score if the authors could actually provide a proof of the claim that the proposed method from [14] gives unbiased gradients for the **original** particle filter. In my view, providing rigorous proofs of such claims is a must for a NeurIPS publication.
---
Rebuttal Comment 2.1:
Title: Unbiased gradients of stratified resampling
Comment: > Stratified resampling [31,32] is a method of drawing i.i.d. samples from some target distribution while reducing the variance associated with those samples. These samples can be thought of as uniformly weighted samples from the target distribution (i.e. where all the weights are equal). To compute gradients for these samples we use Importance Weighted Sample Gradients (IWSG) [14] which can be applied to any sampling method that produces i.i.d. samples. IWSG propagates gradients via adjusting the weights of the drawn samples (which are initially uniform), yielding a differentiable stratified resampling technique that has the lower sampling variance properties of stratified resampling and the unbiased gradients of IWSG.
To raise my score, I would want to see an actual proof that the claim "IWSG gives unbiased estimates of (stratified) resampling" is true. As discussed in my previous comment, what the present method at most gives is an unbiased estimate of a gradient of some (very likely biased) **approximation** of the particle filter (for any arbitrary resampling scheme).
---
Reply to Comment 2.1.1:
Title: Re: Unbiased gradients of stratified resampling
Comment: We emphasize that the “true” particle filter that we are trying to approximate, which will be run at test-time, is a “regularized” particle filter that uses KDEs to approximate marginals. Please see our other comment (Re: Proof of unbiasedness) for more details on this point.
For the KDE resampling step, some number of particles will be drawn from each of the KDE mixture components, before continuous sampling from that component’s kernel (a Gaussian or von Mises density in our experiments). While stratified resampling allocates particles to mixture components via a distinct algorithm, the expected number of particles allocated per component is identical to classic multinomial resampling. See [31, Sec. 2.3] for a proof of this. Then by the linearity of the expectation operator, gradients of this stratified resampling step may be computed via the same importance-sampling estimator we use for multinomial resampling.
We appreciate the question, and agree this point did not have adequate detail in our submitted manuscript. We will add additional detail (adapting the proof in [31] to our setup/notation) in future revisions.
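The expected-allocation claim can also be checked numerically. Below is a hedged sketch (hypothetical helper names, not the authors' code): averaged over many runs, the per-component counts produced by stratified resampling match $n \cdot w_i$, the multinomial expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stratified_counts(weights, n, rng):
    """Allocate n particles to mixture components by stratified resampling
    and return the per-component counts."""
    u = (np.arange(n) + rng.random(n)) / n          # one uniform per stratum
    idx = np.searchsorted(np.cumsum(weights), u)
    idx = np.minimum(idx, len(weights) - 1)         # guard against float round-off
    return np.bincount(idx, minlength=len(weights))

w = np.array([0.5, 0.3, 0.15, 0.05])
n = 20
trials = np.array([stratified_counts(w, n, rng) for _ in range(20_000)])
mean_counts = trials.mean(axis=0)   # should approach n * w = [10, 6, 3, 1]
```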
---
Rebuttal Comment 2.2:
Title: Re: Proof of unbiasedness
Comment: We did not emphasize this due to space constraints, but the theoretical foundations of KDE-based particle filters (which are standardly called “regularized” particle filters in the literature) are quite strong. The cited paper by Musso et al. [40] gives an overview, and detailed results are developed by Le Gland and Oudjane (Annals of Applied Prob. 2004). For conventional discrete-resampling particle filters, theoretical results show weak convergence to the optimal filter posterior as the number of particles becomes large. Regularized particle filters have similar guarantees, but by directly approximating the posterior density, one can also obtain strong convergence (in variational distance or other Lp norms) to the optimal posterior. Moreover, while standard particle filters require strong “mixing” assumptions on the dynamics to avoid degeneracy, regularized filters behave better (in both theory and practice) for low-noise dynamics due to the diversity provided by their continuous resampling step.
By “original” particle filter, we assume you mean the basic “bootstrap” particle-filter that does discrete resampling with replacement. Computing gradients for such particle filters is not a goal of our paper, because they are inferior (in both theory and practice) to regularized (KDE-based) particle filters. At test time, our mixture density particle filters/smoothers employ KDE regularization, and thus this regularization must also be accounted for when computing gradients during training. As detailed in [14, Sec. 4], the IWSG method does give an unbiased estimate of gradients for the regularized, KDE resampling step. The mathematical proof is simple because we build on classic importance sampling principles, but it is nevertheless novel (in this context) and empirically effective.
In the classic particle filtering literature, regularized particle filters were already motivated by having greater test-time stability/robustness, as well as superior theoretical guarantees. One interesting conclusion of our work is that in addition to having these advantages, the smoothness of regularized particle filters also has the advantage of making accurate gradient estimation easier. Experimentally, differentiable particle filters based on discrete-resampling perform dramatically worse than our approach (see the TG-PF and SR-PF baselines in Figure 2 of this submission, and the wider experimental comparison in [14, Figure 5]).
**Reference:**
Le Gland, F., & Oudjane, N. (2004). Stability and uniform approximation of nonlinear filters using the Hilbert metric and application to particle filters. The Annals of Applied Probability, 14(1), 144-187. | Summary: The author's propose the first differentiable particle smoother, building on the MDPF of Younis and Sudderth which replaces the multinomial resampling step with a sampling from a KDE mixture. Their smoother is a two filter smoother where both the forwards and backwards filters are MDPFs. Instead of using the forwards filter to re-weight the particles from the backwards filter, as is conventional, particles are drawn directly from the mixture of the forwards and backwards posteriors. The author's test their smoother, first a toy bearings only tracking example. Then the a more complicated visual localisation task as well and the KITTI dataset. The smoother out performs the state-of-the-art filtering and search-based methods tested.
Strengths: 1. A differentiable particle smoother is the obvious next step to differentiable filtering, especially given that much previous work trains filters in an offline, batched way with access to the ground-truth latent state in training, i.e. on smoothing problems. This is the first paper to successfully construct a differentiable smoother capable of learning the complete model.
2. The experiment results are very strong and give good credibility to the algorithm. The used scenario is considerably more complex than those often used to test algorithms in a similar class.
3. Using the direct product of the mixtures simplifies the calculations greatly.
4. The paper is well organised and easy to follow.
Weaknesses: 1. This method and its predecessor the MDPF appear to enjoy less theoretical support than other differentiable particle algorithms such as Ścibior and Wood's stop-gradient resampling or Thornton, Corenflos et al.'s OT resampler. For example, neither the gradient estimates nor the state estimates appear to be consistent, or at least there is no proof that they are.
2. Unlike the MDPF, this smoother requires the pointwise evaluation of the posterior density. This differs from the usual two filter smoother, I am unconvinced that the KDE approximation is accurate enough for this purpose. Some more analysis could be used to justify this approach.
3. This method has potential impact outside of just visual localisation, instead of several visual localisation experiments an example of a different problem could have boosted the apparent impact.
4. The text on the figures is far too small.
Technical Quality: 3
Clarity: 3
Questions for Authors: None.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: It is mentioned in the limitations that the curse of dimensionality may be lifted somewhat with smarter proposals. There is no evidence that this algorithm or the MDPF is adept at learning a non-bootstrap proposal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your praise and feedback. We are grateful for the careful review of our work and appreciate your highlights on the novel nature of our differentiable particle smoother, and the strength of our experimental results compared to scenarios considered by related work.
**To address some of your concerns please see below:**
- We agree that KDE density approximations may not always be accurate, but established theory [41] tells us that KDEs consistently approximate any smooth density with a sufficient number of samples, and our experiments suggest they can provide accurate approximations with practical numbers of particles.
- We too agree that our method has impacts outside of just visual localization and plan to apply our techniques to other tasks in future work. But we would like to emphasize that given the complexity of the real-world datasets considered in our experiments, achieving these advances to the state-of-the-art in visual localization already required substantial work.
- Please see our global response for more information on figures and formatting.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Reading the other reviews, I agree that the full pseudo-code should appear, even if just in an appendix. A couple of questions: I am personally not very familiar with the theory of KDE-based (or regularised) particle filters; does consistency hold in the particle-filtering case of recursive approximation? If such results exist they could be mentioned in the text. I missed the details on the adaptive MDPF on first reading. How does this strategy differ from the well-known auxiliary particle filter [Pitt and Shephard 1999], which also uses a different distribution for weighting and resampling, other than that the distributions are both learned rather than analytically optimised?
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to respond to our rebuttal. We will add pseudo-code to the appendix in later revisions of this paper.
To answer your questions:
**I am personally not very familiar with the theory of KDE-based (or regularised) particle filters, does consistency hold in the particle filtering case of recursive approximation?**
We did not emphasize this due to space constraints, but the theoretical foundations of KDE-based particle filters (which are standardly called “regularized” particle filters in the literature) are quite strong. The cited paper by Musso et al. [40] gives an overview, and detailed results are developed by Le Gland and Oudjane (Annals of Applied Prob. 2004). For conventional discrete-resampling particle filters, theoretical results show weak convergence to the optimal filter posterior as the number of particles becomes large. Regularized particle filters have similar guarantees, but by directly approximating the posterior density, one can also obtain strong convergence (in variational distance or other Lp norms) to the optimal posterior. Moreover, while standard particle filters require strong “mixing” assumptions on the dynamics to avoid degeneracy, regularized filters behave better (in both theory and practice) for low-noise dynamics due to the diversity provided by their continuous resampling step.
In this context, one can think of the KDE smoothing as adding a small amount of regularization/bias to the density estimate in order to reduce variance. For classic regularized particle filters, this bias is decayed to zero as the number of particles increases by reducing the kernel bandwidth (theorems characterize the optimal convergence rate). For our mixture density particle filters/smoothers, the bandwidth is implicitly tuned to the given application and particle budget via the training process.
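The bias/variance trade-off described above can be seen in a toy 1-D check (a hypothetical sketch using Silverman's rule-of-thumb bandwidth and a standard normal target, not the learned bandwidths of MDPF/MDPS): as the particle count grows and the bandwidth shrinks, the KDE's sup-norm error to the true density decreases.

```python
import numpy as np

rng = np.random.default_rng(3)

def kde_at(grid, samples, h):
    """Gaussian KDE evaluated on a grid."""
    d = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

def true_pdf(x):
    return np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)

grid = np.linspace(-3.0, 3.0, 61)
errs = []
for n in (100, 10_000):
    s = rng.normal(size=n)
    h = 1.06 * s.std() * n ** (-1 / 5)   # Silverman's rule: bandwidth decays with n
    errs.append(np.abs(kde_at(grid, s, h) - true_pdf(grid)).max())
# errs[1] < errs[0]: regularization bias and variance shrink as particles grow
```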
**I missed the details on the adaptive MDPF on first reading. How does this strategy differ from the well known auxiliary particle filter [Pitt and Shepard 1999], that also uses a different distribution for weighting and resampling, other than that the distributions are both learned rather than analytically optimised?**
Great question. The classic auxiliary particle filter is a generative model that tries to look “one step ahead” in the discrete resampling step, by approximating the likelihood of the next observation. However to do this, it needs to be able to quickly compute a mean/mode of the dynamics, which is easy for many classic models (e.g., if dynamics are a fixed non-linear function plus Gaussian noise) but not for our scenario (where dynamics are parameterized by a learned neural network and may be complex, see Fig. 5).
In contrast, our tuning of the KDE resampling bandwidth is not determined by a one-step lookahead of the next observation, but instead by the behavior of the particle filter over many future time steps (as quantified by backpropagation of loss gradients). Interestingly, it is not clear how to do this tuning based on theoretical results for standard particle filters, so we are unaware of direct analogs of our approach in the classic particle filter literature.
In some sense, our adaptive proposals and the auxiliary PF are complementary to each other, and in future research it would be interesting to explore variants of the auxiliary PF idea that accommodate the more complex dynamical models we consider.
**Reference:**
Le Gland, F., & Oudjane, N. (2004). Stability and uniform approximation of nonlinear filters using the Hilbert metric and application to particle filters. The Annals of Applied Probability, 14(1), 144-187. | Summary: The paper proposes a learnable differentiable particle smoother system that extends an existing differentiable particle filter to a smoother. The method utilizes two independent particle filters for the two smoothing directions with importance sampling to address computation scaling issues.
Strengths: - Proposed smoothing approach outperforms filtering setting
- Combination of sensible components into the proposed filtering setup
Weaknesses: While the proposed method appears to work and improves over the baseline particle filter variant, it is unclear how well it performs relative to other smoothers, which would pose a fair comparison. Additionally, the particular task of large-scale global localization is something robotics has been investigating for a long time, and as such one would expect to see comparisons to the typical ways this is solved, such as full visual SLAM or hybrid place recognition with localization. As the KITTI dataset is used, it is also disappointing that no comparison to the benchmark results of the dataset is presented. Overall it is unclear how well the proposed method works when compared to known approaches deployed in practice.
The power of a learnable filter/smoother is that it can learn models where it is hard or impossible to define them manually. However, the chosen application has very well-defined and workable models, making the motivation less convincing. As such it would be highly interesting to see the method evaluated on scenarios where no models exist, or, where they do exist, compared against them. In line with that, it would also be interesting to know how feasible it is to run the proposed method on higher-dimensional data.
The method requires a significant amount of time to train and run, which makes it questionable whether it is even practical for real applications. Another issue arising from this is that different methods were provided with different amounts of training time. While it is valid to see how well methods work when given the same parameters (particles), it is also important to assess how well other methods would perform if they had access to the same amount of training time, which for MDPF vs. MDPS is still twice as much.
The paper mentions that given true dynamics and likelihood models the sampling will compensate for the mismatch between samples and density mixture. Is that statement meant to indicate that these ground truth models are required, or that in an ideal case this would technically happen? If the first meaning is intended, where do these models come from? If the second meaning is intended, how close does the proposed method get to this and is there a way to assess that?
It appears that the absolute error of the proposed method and all others is exceedingly large, being many meters off before achieving any reasonable amount of recall. This is where reproducing numbers from the KITTI benchmark is crucial to get an idea of the hardness of the problem.
Figure 5 is hard to interpret, especially since there are no comparisons to other methods provided. While one can see that the prediction somewhat maps to parts of the future state, it is very noisy and unclear how good it actually is.
Given the massive training times of the method, it would be important to provide insights into the scalability both in terms of number of particles and dimensionality of the estimation problem. Another question that arises in that regard is how the proposed method compares to methods based around Stein variational inference [1] which is capable of handling high-dimensional problems very efficiently.
Overall, while the proposed method performs better than other filtering methods, it is unclear how practical the method is given the required compute time and limitations in the dimensionality of problems. This is further exacerbated by the fact that there is no comparison to typical solutions of the chosen problem setups and a lack of particle smoother baselines.
[1] Liu, Qiang, and Dilin Wang. "Stein variational gradient descent: A general purpose bayesian inference algorithm." *Advances in neural information processing systems* 29 (2016).
Technical Quality: 2
Clarity: 3
Questions for Authors: - With regards to the sampling, in the paper M is selected to be equal between the two filter directions. What is the impact of selecting M and its distribution on the two distributions?
- In the description of baseline methods it is mentioned that refinement methods require an "accurate initial estimate". How good does this have to be in practice and/or in the specific experiments shown in the paper?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitation section is very basic and is really about the limitations of this family of methods in general rather than of the proposed method. Expanding on that and providing information specific to the proposed method would be expected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and feedback. Please see our global response for discussion of training time and paper figure formatting.
With regard to comparison with existing works, we would like to make the distinction between local localization and global localization. Local localization estimates the state in relation to the starting point. Global localization requires estimating the state in relation to a global map origin. Visual SLAM systems almost exclusively solve the local localization task, using the starting position as the origin of their estimated map. If a map is provided, then just the localization part of Visual SLAM can be run, but detailed visual or 3D maps of the environment are needed. These have prohibitive memory requirements at city scales and need constant updating as the visual appearance of the environment changes (i.e. with the weather/seasons) [11]. Hybrid place recognition with localization also requires detailed visual or 3D maps [Wang et al., 2023]. Instead we seek to use planimetric maps for global localization, which are compact and robust to environment changes. It is not obvious how to apply SLAM/Hybrid place recognition systems to this type of map.
We use the KITTI dataset in a way that makes comparison against the KITTI leaderboard impossible. The leaderboard ranks methods for local localization with no prior map provided. The methods seek to only accurately estimate relative position without regard to global position. In contrast, our method solves the global localization task using a map where the starting state itself is uncertain. Comparison of our method to the leaderboard would not be appropriate, as we are completing a different task. We instead compare against state-of-the-art prior methods for global localization (see Figs. 3 and 6) to illustrate task difficulty. Compared to the prior state-of-the-art [11], we have both higher accuracy and lower localization cost at test-time.
Figure 5 highlights the effectiveness of the learned complex, nonlinear dynamics model at propagating particles given noisy actions. Particles are spread to ensure some overlap with the true state, and the measurement model then concentrates probability mass over the true state using the latest observation. Other methods do not have dynamics models and thus similar figures cannot be made.
In appendix B, we show some scalability results for the number of particles used during inference. Interestingly, models can be trained with few particles to limit computation requirements during training, but then many particles can be used during inference to increase accuracy. In this work, we focus on algorithmic innovations over prior differentiable particle filters/smoothers, and apply our MDPS to 3D problems. However, in future work we plan to explore scalability to higher-dimensional problems.
Stein variational inference iteratively transforms particles from a starting distribution to samples from a known target distribution using the score function, and can be thought of as a Monte Carlo method that avoids computing normalizing constants. Although this method has been applied to state estimation [Maken et al. 2022, Fan et al. 2021], these methods assume the prior and dynamics models are fixed and known, and thus could not be applied to our discriminative training scenario where models must be learned from data.
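For concreteness, below is a minimal 1-D sketch of the SVGD update from [1]. The Gaussian target and all constants are hypothetical, and note that the update requires a known score function ∇ log p — precisely what is unavailable in our learned, discriminative setting:

```python
import math
import random

random.seed(0)

# SVGD transports particles toward a target whose score (grad log p) is KNOWN.
# Hypothetical target: Gaussian N(2, 1), so grad_logp(x) = -(x - 2).
grad_logp = lambda x: -(x - 2.0)

def svgd_step(xs, grad_logp, h=1.0, eps=0.05):
    """One SVGD update with an RBF kernel k(x, y) = exp(-(x - y)^2 / (2 h^2))."""
    n = len(xs)
    updated = []
    for i in range(n):
        phi = 0.0
        for j in range(n):
            d = xs[j] - xs[i]
            k = math.exp(-d * d / (2.0 * h * h))
            # Attractive (kernel-weighted score) plus repulsive (kernel gradient) terms.
            phi += k * grad_logp(xs[j]) + (-d / (h * h)) * k
        updated.append(xs[i] + eps * phi / n)
    return updated

particles = [random.uniform(-3.0, 0.0) for _ in range(40)]
for _ in range(800):
    particles = svgd_step(particles, grad_logp)

mean = sum(particles) / len(particles)  # approaches the target mean of 2
```

The repulsive kernel-gradient term is what keeps the particles spread out instead of collapsing onto the mode, but the whole update hinges on evaluating grad_logp at every particle.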
**To answer your questions:**
- In section 4 we say *“Given true dynamics and likelihood models, importance sampling may correct for the fact that smoothed particles are drawn from a mixture rather than a product of filtered densities...”*. We are saying that we wish for samples from the product of densities (hard to sample from) but draw samples from a mixture of densities (easy to sample from) and use importance sampling [21] to compensate for this mismatch by weighting the drawn samples. If the true dynamics and likelihood models are known then importance sampling would produce weighted samples from the true posterior distribution, but since these models are learned the weighted samples are from an approximation of the true posterior distribution. There is no easy way to assess the accuracy of the learned posteriors, since the true distributions are unknown.
- See appendix B for experiments varying the number of particles $M$. A sufficient number of particles is needed to accurately represent the forward and backward filter distributions. An $M$ that is too small can negatively impact MDPS by limiting the representational power of the two filters, thus increasing variance when estimating gradients. Increasing $M$ can only help performance by increasing representational power. We empirically set $M$ to be equal for both filters in order to simplify implementation; however, tuning $M$ separately for each direction could potentially provide computational savings.
- Refinement methods require an accurate initial estimate. [11] sets this to be within a 64x64 meter box centered around the true position and within 45 degrees of the true bearing in order for the method to converge. In [9] this box is decreased to 40x40 meters, and the authors state that if the initial state is far from the true state, extracted features from the image and the map will be too dissimilar and the refinement method is often trapped in poor local optima. In contrast, we initialize our MDPS method with large uncertainty (particles randomly within 150 meters from the true state).
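The importance-sampling correction described in the first point above can be illustrated with a self-contained numerical sketch. The 1-D Gaussians below are hypothetical stand-ins for the forward and backward filtered densities (not our learned models): samples are drawn from the easy-to-sample mixture 0.5f + 0.5b, then reweighted by w(x) = f(x)b(x) / (0.5f(x) + 0.5b(x)) to target the hard-to-sample product f·b:

```python
import math
import random

random.seed(0)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical forward / backward filtered densities (Gaussians for illustration).
f = lambda x: normal_pdf(x, 0.0, 1.0)
b = lambda x: normal_pdf(x, 1.0, 1.0)

# Easy to sample: the mixture 0.5*f + 0.5*b.
samples = [random.gauss(0.0, 1.0) if random.random() < 0.5 else random.gauss(1.0, 1.0)
           for _ in range(20000)]

# Hard to sample: the product f(x)*b(x).  Importance weights fix the mismatch.
weights = [f(x) * b(x) / (0.5 * f(x) + 0.5 * b(x)) for x in samples]

total = sum(weights)
mean = sum(w * x for w, x in zip(weights, samples)) / total
var = sum(w * (x - mean) ** 2 for w, x in zip(weights, samples)) / total

# The product of N(0,1) and N(1,1) is proportional to N(0.5, sqrt(1/2)),
# so the weighted mean and variance should be close to 0.5 and 0.5.
```

With learned (approximate) models, the same reweighting yields weighted samples from an approximation of the true smoothed posterior, as discussed above.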
**Additional References:**
Y. Wang, Y. Qiu, P. Cheng and J. Zhang, "Hybrid CNN-Transformer Features for Visual Place Recognition," in IEEE Transactions on Circuits and Systems for Video Technology, 2023
Maken, Fahira Afzal, Fabio Ramos, and Lionel Ott. "Stein particle filter for nonlinear, non-gaussian state estimation." IEEE Robotics and Automation Letters, 2022
Fan, Jiaojiao, Amirhossein Taghvaei, and Yongxin Chen. "Stein particle filtering." arXiv preprint arXiv:2106.10568 (2021).
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. While I understand the points I am currently not fully convinced by them. I can see that the proposed method works and could be useful, yet I do not truly see how the application used to showcase it is a good one. At this stage I will not change my score but will keep an open mind during the reviewer discussion. | Rebuttal 1:
Rebuttal: Thank you all for your feedback and helpful suggestions. We want to address a few topics that were referenced by multiple reviewers.
First, we would like to emphasize that our work is the first and only generalization of state-of-the-art differentiable particle filters to the more challenging particle smoothing scenario. It is for this reason that most of our experimental comparisons are not to other particle smoothers, but to state-of-the-art methods for the global visual localization task, where we find substantial advantages in both speed and accuracy.
We agree that some formatting aspects of the paper can be improved. Unfortunately due to space constraints, the size of some figure elements and fonts was limited. In future revisions, we will be sure to make changes to the figures to increase the size of renderings and fonts for better readability (moving some details to appendices if necessary).
With regard to training time, we train all methods to convergence. During training, we monitor the training loss, and decay the learning rate each time a training loss plateau is detected. Training is terminated once the training loss plateaus and the learning rate is sufficiently small that decreasing it has no effect on training. Finally, we use validation data to select the best model to use for evaluation (to prevent overfitting). We find that given this training methodology, additional training time for the various methods would not yield better performance as all methods were trained to convergence (i.e. given as much training time as needed to produce the best model). The improved performance of MDPS over MDPF is not due to additional training time, but rather due to better dynamics and measurement models being learned as well as using the backward filter to propagate information from future observations (i.e., using the full sequence to estimate the state posterior distribution). Once trained, MDPF is very efficient during inference as shown in [14]. Since our MDPS simply runs two MDPFs and an additional combination step, it is also efficient during inference, making real-world use more practical than other methods such as dense search [11]. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Aligning Diffusion Behaviors with Q-functions for Efficient Continuous Control | Accept (poster) | Summary: This paper introduces Efficient Diffusion Alignment (EDA), which draws inspiration from the optimization paradigm of DPO to align the distribution of Diffusion policies with the optimal solutions of constrained policy optimization. Experimental results demonstrate the outstanding performance and fine-tuning efficiency of EDA.
Strengths: - The paper is well-written and easy to follow.
- A standout aspect of the paper is the introduction of Bottleneck Diffusion Models (BDM), which utilize neural networks (NNs) to estimate the log-probability of noisy actions. In diffusion models, it is a classic paradigm to use NNs to estimate the score function (gradient of the log-probability), eliminating the need for gradient computations when solving reverse SDE/ODEs. This is particularly crucial in the application of large diffusion models, such as in image generation tasks. However, for smaller applications like action generation, the BDM approach, although requiring gradient computations on NNs during the denoising process, allows for flexible manipulation of likelihood. I believe this approach could lead to more diverse applications in future works beyond EDA.
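To make the contrast concrete, here is a toy sketch of the BDM idea (my own illustration, not the paper's implementation): a hypothetical scalar function plays the role of the bottleneck network's log-density output, and the score needed for denoising is recovered by differentiating it with respect to the action — here via a central finite difference standing in for backpropagation through the network. The payoff is that likelihoods of two candidate actions can be compared directly, which a score-only network cannot do:

```python
# Toy stand-in for a Bottleneck Diffusion Model: the "network" outputs a scalar
# log-density log p(a); the score (needed for denoising) is its gradient in a.
# Hypothetical quadratic log-density, i.e. log N(a; 0.3, 1) up to a constant.
def log_density(a):
    return -0.5 * (a - 0.3) ** 2

def score(a, h=1e-5):
    # Central finite difference stands in for backprop through the network.
    return (log_density(a + h) - log_density(a - h)) / (2.0 * h)

# Score-based models expose only score(); a BDM-style model also exposes
# log_density(), enabling direct likelihood comparison of candidate actions.
a_good, a_bad = 0.31, 1.7
assert log_density(a_good) > log_density(a_bad)

# The finite-difference score matches the analytic gradient -(a - 0.3).
assert abs(score(0.9) - (-(0.9 - 0.3))) < 1e-6
```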
Weaknesses: - EDM shows only a marginal performance improvement compared to baselines in the experiments, being slightly better than DQL and QGPO.
- Using EDM in practice may be cumbersome. Firstly, manual preparation of an alignment dataset is required. Additionally, EDA seems to be sensitive to the temperature coefficient $\beta$, which is bound to the training process and cannot be adjusted during inference.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors provide any insights on the selection of the number $K$ of actions in the state-action pairs within the alignment dataset? What is a sufficient value for $K$?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper only conducts experiments on benchmarks from D4RL. Including more real-world application benchmarks, such as Robomimic, may enhance the impact of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Official Response to Reviewer FQV2
We completely agree with the reviewer's assessment of our paper, regarding both the praise and the pointed-out limitations. We thank the reviewer for the valuable feedback and for the expertise shown in delivering such an insightful review, even though some deep motivations of our work were not extensively described in the text.
**Q1: Any insights on the selection of the number K of actions in the state-action pairs within the alignment dataset?**
**A1:** We would like to refer the reviewer to **Figure 8 in the paper, where we have already conducted ablation studies of K**.
Our experience is that $K>8$ is already sufficient for reasonably good performance in D4RL tasks. For several environments like hopper, $K>100$ will hurt performance (possibly because the Q-model is not good enough).
A practical way to determine $K$ is to first try whether rejection sampling can improve the performance of the pretrained behavior model: selecting the best action, with the highest Q-value, among $K$ independently sampled behavior actions.
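This best-of-$K$ probe can be sketched as follows; the uniform behavior distribution and quadratic critic below are hypothetical placeholders for the pretrained diffusion behavior model and Q-model:

```python
import random

random.seed(0)

def sample_behavior_action():
    # Placeholder for one draw from the pretrained behavior model.
    return random.uniform(-1.0, 1.0)

def q_value(action):
    # Placeholder critic: prefers actions near 0.3 (arbitrary choice).
    return -(action - 0.3) ** 2

def best_of_k(k):
    """Keep the action with the highest Q among k behavior samples."""
    return max((sample_behavior_action() for _ in range(k)), key=q_value)

def avg_best_q(k, trials=2000):
    return sum(q_value(best_of_k(k)) for _ in range(trials)) / trials

# If best-of-K clearly improves on the raw behavior policy (K = 1), alignment
# with that K is likely to help; diminishing returns suggest K is large enough.
q1, q8 = avg_best_q(1), avg_best_q(8)
```

In this toy setup, `q8` is noticeably higher than `q1`, mirroring the kind of improvement one would look for before committing to a particular $K$.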
**Q2: Using EDM in practice may be cumbersome. Firstly, manual preparation of an alignment dataset is required. Additionally, EDA seems to be sensitive to the temperature coefficient β, which is bound to the training process and cannot be adjusted during inference.**
**A2:** We agree these are critical limitations of the EDA method.
1. The $\beta$ needs to be predefined before training. This is indeed a bummer for EDA. There's currently a method (QGPO) where $\beta$, as a diffusion guidance scale, can be tuned during inference, which is realized by training an independent diffusion guidance network. However, QGPO cannot be initialized from a pretrained diffusion behavior model and thus has to be trained from scratch. We currently do not know how to combine the strengths of QGPO and EDA elegantly and think this could be a very important and fundamental topic. Its significance extends beyond offline RL and could benefit related research fields such as LLM alignment and diffusion alignment.
2. How to remove the need for multiple actions per state is another crucial topic that we are very interested in. Still, manual preparation of the dataset can be avoided by sampling actions during training in an online manner; however, this is somewhat computationally expensive, and we found it not helpful for D4RL performance.
3. We think the sensitivity to $\beta$ is to some extent the problem of human-defined rewards. Since the reward signal is task-specific, it can be very difficult to apply the same hyperparameter to all tasks. We believe this is a common challenge for most RL methods (and perhaps a privilege for preference-based RL).
**Q3: EDM shows only a marginal performance improvement compared to baselines in the experiments.**
**A3:**
1. We'd like to stress that **D4RL is a highly competitive benchmark**. The **upper bound** score for D4RL is 100 and most baselines have achieved 80+. However, EDA still outperforms all baselines in the average performance across 16 tasks.
2. EDA demonstrates **obvious improvement in sample efficiency and convergence speed** (Figure 5). Given only 1% of data, EDA maintains 95% of performance while the best-performing baseline keeps only 80% of performance. **We think this sample efficiency improvement is more critical.** High sample efficiency is an essential part of real-world application of alignment algorithms.
**Q4: Including more real-world application benchmarks, such as Robomimic, may enhance the impact of the paper.**
**A4:** We definitely agree with the reviewer. As a matter of fact, we have considered applying EDA to multi-task, general-purpose, and real-world agents just like Robomimic when we first came up with this idea. The main difficulty is that other benchmarks lack enough diffusion baselines to compare with. Still, this could be a very meaningful research direction.
We hope the reviewer finds it acceptable that we list this as a future work and believe this does not harm the key contribution of the current article.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. Firstly, I want to apologize for my typo in writing "EDA" as "EDM."
I personally really like the experiments on sample efficiency. The fact that EDA shows almost no performance loss with only 1% of the data is quite surprising. I want to hear the authors' in-depth analysis and discussion of this result. in the authors' view, what could be the main reasons behind this? Additionally, I strongly recommend that the authors consider adding a section in the paper for a detailed discussion on this topic. I believe this would bring many interesting insights.
And, I still stand by my judgment. I keep the rating unchanged.
---
Rebuttal 2:
Title: Additional Response
Comment: We thank the reviewer for the prompt reply and the interest in our work.
We think the insight for being able to improve alignment efficiency here is that transforming $\mu$ into $\pi^*$ (alignment) is fundamentally much easier than learning $\mu$ from scratch (pretraining). In alignment, we are simply trying to "suppress" some bad modes learned during pretraining instead of trying to find new meaningful modes.
The theoretical explanation could be $KL(\mu\|\pi^*)$ << $KL(\mu\|\text{uniform dist})$. In high-dimensional data space, actually meaningful data is very scarce. Imagine trying to sample an image from a Gaussian distribution: you wouldn't get a visually realistic image from even $10^{100}$ candidates. However, if you already have a pretrained image-generation model like Stable Diffusion, you can easily get a good-looking one from 4-16 image samples.
Why can EDA have high sample efficiency while other diffusion-based algorithms cannot? It is simply because EDA is completely initialized from the pretrained model. There are no network parameters that must be learned from scratch. Finetuning a model is much easier than learning a completely new one. Similar successes have already been demonstrated by extensive exploration in LLM alignment research.
---
Rebuttal Comment 2.1:
Comment: Thank you for the reply!
So, does the statement "It is simply because EDA is completely initialized from the pretrained model." imply that during the behavior pretraining phase, EDA trains a behavior policy using the entire dataset and then uses 1% of the dataset during the fine-tuning phase? If that's the case, then the high sample efficiency of EDA is indeed easy to understand.
---
Rebuttal 3:
Comment: Yes, though for the behavior pretraining dataset, reward labels are excluded. The diffusion model learns all kinds of behavior, regardless of whether they are good or bad. | Summary: This paper introduces Efficient Diffusion Alignment (EDA) for offline continuous control, combining preference-alignment theory with reinforcement learning. Specifically, EDA enables alignment finetuning by representing the diffusion score as the derivative of a scalar neural network with respect to actions, which allows direct density estimation. During fine-tuning, DPO is used for policy improvement. In their experiments, EDA demonstrates both strong performance and good sample efficiency in fine-tuning on D4RL.
Strengths: This is overall a novel and interesting paper. The way it connects preference-based optimisation with diffusion is simple yet effective, and allows efficient training of the diffusion-based RL policies. Besides, the authors have also provided theoretical justifications and toy examples for intuitive explanations to help the understanding. EDA also achieves a strong performance on the D4RL benchmark.
Weaknesses: 1/ The presentation has certain room for improvement. The introduction made me very confused about why we want to do alignment rather than directly optimising the policy with standard RL. This is never explained in the paper either. After reading the whole paper, it is clearer to me. I do think this should be improved for the paper to merit acceptance.
2/ Although EDA is simple by itself, the overall framework is quite complex, as training is separated into multiple stages: pretraining of the diffusion policy, pretraining of the Q functions, and final alignment. This actually makes the whole framework much more complex than standard offline diffusion RL methods. It would be great if the authors could provide some explanation of the actual training time of EDA compared with the baselines.
3/ Normally for pretraining and alignment, we are referring to training general purpose agents, and performing task-specific alignment. However, the current form of EDA doesn’t seem to support training general purpose agents that solve tasks with different action dimensions. Instead, it seems the current experiments are all conducted in a single-task manner, rather than multi-task settings. This actually hinders the contribution of the work.
4/ Considering the complexity of the framework, the proposed EDA doesn’t seem to provide much performance improvement compared with the baselines (83.7 overall return of EDA compared with 82.1 of DQL).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1/ I believe this is a general framework that converts the policy optimization problem to alignment. Have you tried other alignment methods and how they perform?
2/ For a fixed environment but different offline datasets, have you tried pretraining on all data then performing alignment?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: As discussed in the weaknesses, although the current method claimed to perform general pretraining, it seems to be performing single-task pretraining and alignment. Also, the overall pipeline is more complex and time consuming than the standard diffusion-based offline RL methods, and didn’t seem to provide much performance improvement. Nevertheless, I still think this is an interesting work, and I’ll vote for a weak acceptance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Official Response to Reviewer y7D4
We truly appreciate the reviewer for the valuable suggestions and comments. Concerns are mainly about the computational complexity of EDA and the experimentation/application of the algorithm.
**Q1: Could the authors provide some explanation of the actual training time of EDA, compared with baselines?**
**A1:**
**Actual time tested on NVIDIA A40**:
|Method|Behavior pretraining|Critic training|Policy Alignment|Dataset|Overall|
|---|---|---|---|---|---|
|Ours|3.4h (1M steps)|1.4h (1M steps)|0.07h (20k steps)|0.6h|5.5h|
|IDQL|1.9h (1M steps)|1.4h (1M steps)|-|-|3.3h|
|QGPO|1.9h (1M steps)|4.2h (1M steps)|3.2h (1M steps)|1.0h|10.3h|
|DiffusionQL|-|4.1h (1M steps)|2.9h (1M steps)|-|7h|
Overall, the inference/training speed of BDM is less than 2 times that of a normal diffusion model with the same model size. However, its advantages are:
- **Density estimation**: **20+ times faster** than diffusion models, similar to Gaussian and EBM models.
- **Alignment**: **Negligible computation**, because EDA allows finetuning the model for very few steps instead of learning a brand-new model from scratch.
**Q2: The overall pipeline is more time-consuming than standard diffusion-based offline RL methods.**
**A2:** We respectfully disagree with this comment. From **A1** we can see that in computational demand **EDA roughly matches other diffusion-based baselines**, and is somewhat more efficient because it has negligible policy alignment computation.
**Q3: EDA should support training general-purpose agents instead of conducting all experiments in a single-task manner.**
**A3:** We couldn't agree more with the reviewer on this comment. This is actually the very first "hidden" motivation of our paper. As a matter of fact, we considered applying EDA to multi-task settings and building general-purpose agents when we first came up with this idea, though we indeed faced some difficulties:
1. Since EDA is a pretty new idea, before moving on to large-scale/real-world experiments, we have to convince the research field as well as ourselves that it is actually effective and competitive. This means comparing with existing SOTA diffusion baselines in well-recognized continuous control tasks, where D4RL is the predominant benchmark. To our knowledge, there are **very few public benchmarks with a sufficient number of diffusion baselines** for meaningful comparisons besides D4RL.
2. We focus on the theoretical derivation and motivation of the EDA algorithm in this paper. **Applying EDA for general-purpose multi-task learning would involve extensive engineering practices like data collection and task definition that are mostly orthogonal to the core focus of our original article.** This might divert the attention from the theoretical contributions of our paper, which we aim to emphasize.
Overall, while we highly recognize the value of general-purpose experiments, we sincerely hope the reviewer finds it acceptable that we list this as a future work of our paper. **We believe this does not harm the key contribution of the current article.**
**Q4: Although EDA is simple by itself, the overall framework is quite complex (3 stages). This actually makes the whole framework much more complex compared with standard offline diffusion RL methods.**
**A4:**
- Standard offline diffusion RL methods also require at least 2 stages of training (IQL/AWR/DiffusionQL). Some works also require 3 stages (BCQ/BEAR/BRAC/QGPO).
- The LLM training pipeline also requires three stages (pretraining, SFT, alignment).

**There are multiple stages of training because we need to handle distinctive data types and data amounts in real-world applications.**
For instance, in autonomous driving, we can collect all driving behaviors from drivers, which creates a feasible continuous action space and is helpful for learning a foundational end-to-end model. This requires large-scale pretraining.
However, we may only have enough human resources to label all those very harmful actions that lead to car crashes and preferred actions that save drivers' lives. These data require efficient alignment.
**Q5: I believe this is a general framework that converts the policy optimization problem to alignment. Have you tried other alignment methods, and how do they perform?**
**A5:** We compared other classic preference-based LLM alignment methods, including DPO, SimPO, and IPO, in our initial experiments. We do not have exact numbers, but they all perform similarly, and DPO was chosen given its simplicity and theoretical elegance.
Also, as our ablation results in Figure 6 have pointed out, we find that value-based alignment (EDA) outperforms preference-based approaches such as DPO.
**Q6: The improved performance seems marginal compared with the baselines.**
**A6:**
1. We'd like to stress that **D4RL is a highly competitive benchmark**. The **upper bound** score for D4RL is 100 and most baselines have achieved 80+. However, EDA still outperforms all baselines in the average performance across 16 tasks.
2. EDA demonstrates **obvious improvement in sample efficiency and convergence speed** (Figure 5). Given only 1% of data, EDA maintains 95% of performance while the best-performing baseline keeps only 80% of performance. **We think this sample efficiency improvement is more critical.** High sample efficiency is an essential part of real-world application of alignment algorithms.
**Q7: For a fixed environment but different offline datasets, have you tried pretraining on all data and then performing alignment?**
**A7:** Unfortunately no. This is because, for D4RL, datasets for a single robot are largely overlapping/reused; for instance, the hopper-ME dataset is already a mixture of two datasets, and the hopper-M dataset is exactly its subset. Concatenating them together would add little empirical value.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewer for the detailed responses and explanations. My concerns are well addressed and I will increase my rating to 7.
---
Rebuttal 2:
Comment: We are glad that the reviewer is happy with our responses! We also appreciate the reviewer for the prompt and positive feedback. | Summary: This paper introduces Efficient Diffusion Alignment (EDA) for solving offline reinforcement learning problems. The approach involves first training a behavior cloning model using only state-action data, without reward information. Subsequently, the model is fine-tuned with rewards using DPO. During the reward-free training phase, a diffusion model is used as the policy network. Since the diffusion model doesn't provide the likelihood of the predicted action, it is modified to predict a scalar representing the density/energy of the action. The noise predicted by the model is computed by backpropagating to the input actions. Experiments are done on D4RL dataset, the paper shows that the proposed method can greatly improve the training efficiency.
Strengths: - The paper is well-written and clearly explains the proposed method.
- The approach appears to be novel.
Weaknesses: 1. If I understand correctly, the proposed method needs backpropagation to compute the predicted noise and then another backpropagation to update the network. Would this require computing higher-order gradients?
2. If so, there is a lack of experiments and discussion about the efficiency and robustness of the method given the potential computational cost and sensitivity of computing higher-order gradients.
3. The method is only evaluated on the D4RL benchmark.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Does Equation 10 require computing the gradient of the gradient for the proposed Bottleneck Diffusion Models? If so, this aspect should be experimentally evaluated and discussed.
2. Why was the diffusion model chosen as the policy network? Could other energy-based generative models, or even simpler models like GMM, be used to obtain data likelihood?
3. What are the benefits of using the proposed method over alternative methods that apply DPO on diffusion models in the image generation field, such as Diffusion-DPO?
Reference: Wallace, Bram, et al. "Diffusion model alignment using direct preference optimization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The paper lacks comparison with other energy-based generative models and simpler methods for obtaining data likelihood.
- The proposed method is only evaluated on the D4RL benchmark; additional benchmarks would strengthen the experimental validation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Official Response to Reviewer jAma
**Q1: Does EDA require backpropagation to compute the predicted noise and then another backpropagation to update the network? Would this involve computing higher-order gradients?**
**A1:** Yes. The pretraining of BDM requires two backpropagation steps for a single training iteration.
**However, we don't think it requires computing higher-order gradients** (such as the computationally expensive Jacobian Matrix).
The BDM loss gradient simplifies to:
$\frac{\partial L_\theta}{\partial \theta} = \frac{\partial }{\partial \theta} \|\frac{\partial }{\partial a_t}f_\theta(a_t,s, t)-\epsilon\|^2$
The higher-order gradient $\frac{\partial^2 f_\theta}{\partial \theta \partial a_t}$ is computationally expensive. However, such **Jacobian calculations are automatically avoided by PyTorch** in practice.
The basic logic is that a computational graph is constructed when calculating $\frac{\partial f_\theta}{\partial a_t}$. PyTorch then treats $\frac{\partial f_\theta}{\partial a_t}$ like any normal neural network, except that it is about twice as large. **It is convenient to understand the proposed BDM models just as a normal network** with twice the computation. (Figure 2 in paper)
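As a minimal sketch of this two-backprop step (a toy network with made-up dimensions, not the authors' implementation), the pattern in PyTorch looks like:

```python
import torch

# Toy bottleneck network: (action, state) -> one scalar energy.
f = torch.nn.Sequential(torch.nn.Linear(6, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

a_t = torch.randn(8, 3, requires_grad=True)  # noised actions
s = torch.randn(8, 3)                        # states (time conditioning omitted)
eps = torch.randn(8, 3)                      # target noise

energy = f(torch.cat([a_t, s], dim=-1)).sum()
# Backprop 1: the score estimate d f / d a_t. create_graph=True records this
# computation so the loss below stays differentiable w.r.t. the parameters --
# PyTorch never materializes the full Jacobian d^2 f / (d theta d a_t).
score = torch.autograd.grad(energy, a_t, create_graph=True)[0]

loss = ((score - eps) ** 2).mean()
loss.backward()  # backprop 2: gradients for the parameters of f
```

The second backward pass simply treats the first gradient computation as part of an enlarged forward graph, matching the "2-times larger network" intuition above.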
**Q2: Lack of experiments and discussion about the efficiency and robustness given the potential computational cost and sensitivity of computing higher-order gradients.**
**A2:**
**Computational cost:** EDA roughly matches other diffusion-based baselines, and is somewhat more efficient because its policy alignment computation is negligible. The inference/training speed of BDM is less than twice that of a normal diffusion model of the same size.
| Method | Behavior pretraining | Critic training | Policy Alignment | Dataset | Overall |
|---|---|---|---|---|---|
| Ours|3.4h (1M steps) | 1.4h (1M steps) | 0.07h (20k steps) | 0.6h | 5.5h |
| IDQL|1.9h (1M steps) | 1.4h (1M steps) | - | - | 3.3h |
| QGPO | 1.9h (1M steps) | 4.2h (1M steps) | 3.2h (1M steps) | 1.0h | 10.3h |
| DiffusionQL | - | 4.1h (1M steps) | 2.9h (1M steps) | - | 7h |
**Robustness:** including a derivative in the loss function has **already been validated in various research fields**: diffusion modeling for image generation [1], and training PINNs [2].
[1] Should EBMs model the energy or the score? Tim Salimans, Jonathan Ho
[2] Scientific machine learning through physics–informed neural networks.
**Q3: Could other energy-based generative models, or even simpler models like GMM, be used to obtain data likelihood? Why diffusion models exactly? Lack of comparison with other energy-based generative models.**
**A3:**
1. We refer the reviewer to **Appendix C in the paper, where we have already compared various generative models**, including VAE and EBM.
2. Simpler generative models could potentially also be effective in continuous control. However, at present many studies consistently indicate that **diffusion models outperform other generative methods** ([2] (Figure 1), [3] (Table 4), and [4] (Figure 1)). They have also been successfully deployed on real-world robots [1].
3. We conducted some preliminary experiments applying GMM to EDA, with the results below (mixture number = 100, EM training, 2 seeds):
| |Average| Half-ME | Half-M | Half-MR | Hop-ME | Hop-M | Hop-MR | Walk-ME | Walk-M | Walk-MR |
|---|--|---|--|--|--|---|--|--|---|---|
| Diffusion BC| 67.0 | 73.9 | 47.9 | 42.2 | 71.1 | 63.9 | 69.9| 98.9 | 68.7 | 66.5 |
| GMM BC| 63.7 | 68.2 | 45.3 | 44.7 | 56.6 | 64.3 | 67.4| 97.0 | 68.7 | 65.0 |
| **Diffusion+EDA**| **87.3**|93.2|57.0|51.6| 104.9 | 98.4 | 92.7 | 111.1 | 87.4 | 89.2 |
| **GMM+EDA** |28.3|1.8|40.2|36.6|3.0|33.1|2.2|47.5|62.0|28.0|
[1] Octo: An Open-Source Generalist Robot Policy
[2] DiffusionQL paper
[3] Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling ICLR 2023
[4] Imitating Human Behaviour with Diffusion Models Neurips 2023
**Q4: The method is only evaluated on the D4RL benchmark. Additional benchmarks would strengthen the experimental validation.**
**A4:** We definitely agree that additional results from other benchmarks could enhance the validation of our experiments. However, given the limited time for rebuttal, we face a practical dilemma:
1. Our primary goal is to demonstrate the effectiveness and competitiveness of our proposed algorithm. This involves comparisons with state-of-the-art diffusion baselines in the field of offline reinforcement learning, where D4RL is the predominant benchmark. To our knowledge, there are **very few public benchmarks with a sufficient number of diffusion baselines** for meaningful comparisons besides D4RL.
2. Our experiments already cover **16 tasks** within the D4RL benchmarks, spanning **3 distinct fields**: Locomotion, Navigation, and Manipulation. Additionally, we provide illustrative 2D experimental results in **6 more tasks**. Considering that the core contribution of this paper is somewhat theoretical, we believe this is sufficient to validate the effectiveness and support the main claims of our method.
**Q5: What are the benefits of using the proposed method over alternative methods that apply DPO on diffusion models in the image generation field, such as Diffusion-DPO?**
**A5:**
1. Most image diffusion alignment methods, including Diffusion-DPO, are preference-based. **In offline RL, the concept of rewards is essential, and such rewards are usually continuous rather than binary.** Aligning pretrained policies with continuous rewards is both theoretically and practically significant.
2. Different from image generation, it is **more important in RL to have an efficient way of calculating data density**, which current diffusion models do not provide. PPO/SAC/REINFORCE all require calculating data probabilities. The proposed BDM models introduce a novel approach to address this issue, potentially expanding the application of diffusion models in RL, as also noted by reviewer FQV2.
---
Rebuttal Comment 1.1:
Comment: I have reviewed the rebuttal, and my concern has been addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback! We are glad that our responses help. | Summary: Diffusion models have shown impressive results in solving real-world problems. This paper builds on the success of large language models (LLMs) to enhance the development of diffusion models. Specifically, they introduced Efficient Diffusion Alignment (EDA), a pipeline to train diffusion models in two stages similar to LLM: pretraining and finetuning. In the pretraining stage, they proposed a bottleneck diffusion model (BDM), which modifies the score function from predicting control actions to predicting the scalar value of the control action. With this pre-trained score function, they suggest performing policy optimization by aligning the pre-trained diffusion model to Q-function values. The authors conducted experiments on various environments and compared the results to several relevant baselines. This paper's proposed EDA pipeline outperforms all the previously proposed algorithms for aligning diffusion models and provides empirical insights into why their algorithm works well. In particular, they demonstrated that their algorithm is very sample efficient, requiring very few samples to learn successfully, and they showed that their algorithm could be combined with various Q-learning methods successfully.
Strengths: - The paper addresses an important problem regarding alignment with diffusion models.
- The paper performs several ablation studies to showcase the importance of key algorithm design choices.
- The policy optimization procedure to minimize the q-values between optimal and reparameterization q-function is interesting and works well in practice.
Weaknesses: - The paper needs more clarity regarding the difference between behavior density estimation and the normal diffusion model.
- The paper needs more clarity about how the pre-trained scalar network is trained.
- The experiment results for other baselines are missing standard deviation bars, so it is hard to tell if the results are significant, given that some baselines are extremely close in performance.
- The value and preference optimization experiments need to include the derivation of what you optimized when k>2 for the preference experiments.
- No ablation experiments compare the proposed BDM method with traditional conditional diffusion models.
- There are no vanilla BC experiments in the results presented.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Is the Behavior density estimation model a value network instead of a policy? Because you are essentially predicting a scalar value given state and action at a particular time.
- How are you minimizing equation (10) if your BDM model outputs a scalar but your Gaussian noise variable $\epsilon$ is a vector with dimension equal to the number of actions?
- What do you mean by the bottleneck value being expanded back to $\mathbb{R}^{|A|}$ through back-propagation? The input to the function $f$ is $a_t$, which is a scalar, not $\mathbf{a}_t$, which is a vector.
- Does $f^*=\nabla_{\mathbf{a}_t} \log\mu_t(a_t|s,t)$? If so, what is the difference between $\epsilon_\phi$ and $f_\phi$?
- Is $a_{t} = \mathbf{a}_t \cdot e_{i}$, where $e_i$ is a vector of the standard basis?
- Should the equation on line 154 be $Q^*$ instead of $Q$?
- Do $f^\pi$ and $f^\mu$ both output scalar values, not vectors corresponding to actions? If so, how is equation (13) outputting a vector value?
- In Table 1, how do vanilla BC and vanilla diffusion BC perform?
- Line 242, there is a typo; you meant to say figure 4, not table 4.
- On line 254, you should reference the figure in the text so the reader can correlate what the text is saying with the results in the figure.
- What is the sample efficiency of the diffusion-QL baseline? Because it is not included in Figure 5(a).
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Official Response to Reviewer 2HKR (1/2)
We thank the reviewer for the very detailed feedback! The reviewer's concerns are twofold: 1. confusion regarding what the BDM model is trying to model/learn and how to train it; 2. the comprehensiveness of the experiments. We hope our explanations and the newly added experimental results can help address these concerns.
**Q1: Needs more clarity regarding "the difference between behavior density estimation and the normal diffusion model" and "how the pre-trained scalar network is trained."**
**A1:**
Suppose our dataset distribution is $\mu(a|s)$.
| | Classic Gaussian Policy | Normal diffusion | Bottleneck Diffusion (ours)|
|--|--|--|--|
| network | $\pi_\theta(s):\mathcal{S} \rightarrow \mathcal{A}$ | $\epsilon_\theta(a_t,s, t):\mathcal{A} \times \mathcal{S} \times \mathbb{R} \rightarrow \mathcal{A}$ | $f_\theta(a_t,s, t): \mathcal{A} \times \mathcal{S} \times \mathbb{R} \rightarrow \mathbb{R}$ |
| predicts| $\pi_\theta^*(s)=\mu(s)$| $\epsilon_\theta^*(a_t,s, t) = \nabla_{a_t}\log\mu_t(a_t\|s)$ | $f_\theta^*(a_t,s, t) = \log\mu_t(a_t\|s) + C(s)$ |
| predicts| average **action** | **score function** of action distribution | behavior action **density** (log probs) |
| sample action | $a =\pi_\theta(s)$| $a_{t-1}=\epsilon_\theta(a_t,s, t) + \text{noise};a=a_0$ | $a_{t-1}=\nabla_{a_t}f_\theta(a_t,s, t) + \text{noise};a=a_0$ |
| sampling type| direct| iterative | iterative |
| Expressive model?| No| Yes | Yes|
| Allow density calculation?| Yes | No | Yes|
| training| $\|\pi_\theta(s)-a\|^2$| $\|\epsilon_\theta(a_t,s, t)-\epsilon\|^2;a_t=a_0 + \sigma_t\epsilon;\epsilon\sim\mathcal{N}(0, I)$ | $\|\nabla_{a_t}f_\theta(a_t,s, t)-\epsilon\|^2;a_t=a_0 + \sigma_t\epsilon;\epsilon\sim\mathcal{N}(0, I)$ |
The **only difference** between normal diffusion models and our BDM is as follows: the diffusion model $\epsilon_\theta$ directly estimates the score function (outputting a vector of dimension $|\mathcal{A}|$), while BDM $f_\theta$ uses its derivative $\nabla_{a} f_\theta$ to estimate the score function. Thus, $f_\theta$ outputs a scalar, but $\nabla_{a} f_\theta$ is a vector of dimension $|\mathcal{A}|$. **Therefore, we can think of BDMs as exactly normal diffusion models with a different model architecture.** (See Figure 2 in the paper).
The training methods remain exactly the same.
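A minimal shape sketch of this architectural difference, assuming a toy fully connected network with made-up dimensions (not the paper's actual model): the scalar "bottleneck" energy is expanded back to the action dimension by one autograd call.

```python
import torch

# Toy bottleneck network: action (dim 3) + state (dim 4) + time (dim 1) -> scalar.
f = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

a_t = torch.randn(5, 3, requires_grad=True)        # batch of noised actions
s = torch.randn(5, 4)                              # states
t = torch.rand(5, 1)                               # diffusion times

energy = f(torch.cat([a_t, s, t], dim=-1))         # vector -> scalar per sample
score = torch.autograd.grad(energy.sum(), a_t)[0]  # scalar -> vector (the score)

assert energy.shape == (5, 1)    # one scalar per sample: the bottleneck
assert score.shape == a_t.shape  # expanded back to the action dimension
```

Training uses `score` in place of a normal diffusion model's noise prediction, so the loss is unchanged.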
**Q2: Is the Behavior density estimation model a value network instead of a policy?**
**A2:** Following **A1**, the BDM model $f_\theta$ is a learned behavior policy, but it can also be used to estimate the behavior density of a given action.
**Policy**: When we consider $\epsilon(a_t,s,t):= \nabla_{a_t} f_\theta(a_t, s, t)$, which is easily calculable by PyTorch through back-propagation, BDM is exactly a diffusion policy that can be used to sample actions. **This feature is mainly used during pretraining and evaluation.**
**Value**: Without the partial derivative, $f_\theta(a_t, s, t) \approx \log \mu_t(a_t|s)$, so the BDM model acts like an energy-based model that can estimate action likelihood. **This is mainly used during the alignment phase.**
**Q3: Does $f^\pi$ and $f^\mu$ both output scalar values, not vector corresponding actions? If so, how are you in equation (13) outputting a vector value?**
**A3:** Yes, $f^\pi$ and $f^\mu$ both output scalars. We may not fully understand the reviewer's question, but the outputs of Eq. 13, including $Q_{\theta}$, $\pi_{t, \theta}(a_t|s, t)$, and $\mu_{t, \phi}(a_t|s, t)$, are indeed all scalars.
Original Eq 13 in paper:
$Q_{\theta}(s, a_t, t) := \beta \log \frac{\pi_{t, \theta}(a_t|s, t)}{\mu_{t, \phi}(a_t|s, t)} + \beta \log Z(s) = \beta [f_\theta^\pi(a_t|s,t) - f_\phi^\mu(a_t|s,t)] + \beta [\log Z(s, t)- C^\pi(s, t) + C^\mu(s, t)]$
**Q4: Does $f^* =\nabla_{a_t} \log\mu_t(a_t|s,t)$? If so, what is the difference between $\epsilon_\phi$ and $f_\phi$?**
**A4:** **No.** $f_\mu^* =\log\mu_t(a_t|s,t)$ while $\epsilon_\mu^* =\nabla_{a_t}\log\mu_t(a_t|s,t)$. $\epsilon_\phi$ is the notation for the **normal** diffusion model; you can see that $\epsilon_\phi$ is rarely used when describing BDM models.
**Q5: What do you mean by the bottleneck value being expanded back to $\mathbb{R}^{|A|}$ through back-propagation? The input to the function $f$ is $a_t$, which is a scalar, not $\mathbf{a}_t$, which is a vector.**
**A5:** The input $a_t \in \mathcal{A}$ of $f_\theta$ is a **vector**. (explained in **A1** and **A2**)
Stage 1: $f_\theta(a_t,s, t): \mathcal{A} \times \mathcal{S} \times \mathbb{R} \rightarrow \mathbb{R}$ takes in this vector and squeezes $a_t$ into a single scalar, which we call the bottleneck energy.
Stage 2: When calculating $\nabla_{a_t} f_\theta(a_t,s, t)$, we perform back-propagation and the output is again a vector of dimension $|\mathcal{A}|$ **(the bottleneck value being expanded back to $\mathbb{R}^{|A|}$)**.
Vector → Scalar → Vector.
It is just like a U-Net. **Figure 2 (left) in the paper** is a more illustrative explanation.
**Q6: No vanilla (diffusion) BC experiments in the results presented.**
**A6:** We thank the reviewer for this suggestion and present additional experimental results below.
||Average|HalfCheetah-ME|HalfCheetah-M|HalfCheetah-MR|Hopper-ME|Hopper-M|Hopper-MR|Walker2d-ME|Walker2d-M|Walker2d-MR|
|--|---|--|----|--|--|--|---|---|-------|---|
|**EDA(ours)**|**87.3**|93.2±1.2|57.0±0.5|51.6±0.9|104.9±7.4|98.4±3.9|92.7±10.0|111.1±0.7|87.4±1.1|89.2±5.5|
|DiffusionBC|67.0|73.9±28.5|47.9±3.8|42.2±6.5|71.1±37.1|63.9±15.0|69.9±28.2|98.9±25.1|68.7±24.5|66.5±9.9|
|BC|43.0|35.8|36.1|38.4|111.9|29.0|11.8|6.4|6.6|11.3|
*BC results come from D4RL paper.
***
## Reminder
**QA7-QA12:**
**We refer the reviewer to the global rebuttal posted at the top of the webpage for the rest (part 2/2) of our response. This is due to the severe page limit**.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response and explanations. Since most of my concerns have been addressed, I will increase my rating to a 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback! We are glad that our responses help. | Rebuttal 1:
Rebuttal: # Rebuttal Summary
We would like to thank all the reviewers for their valuable comments. We are encouraged to see all reviewers recognize the theoretical novelty of our work. Reviewers FQV2 and y7D4 highlight the critical importance of the problem with diffusion models that we are trying to solve. They also point out the vast potential of the proposed BDM model. Concerns primarily relate to the method's computational efficiency, the clarity of the paper, and limitations in the D4RL experiments.
Below, we summarize the main actions taken during the rebuttal:
1. Provided detailed computational efficiency experiment results for EDA and compared these with several diffusion-based baselines.
2. Conducted additional experiments to offer more baseline results, such as BC and diffusion BC.
3. Additionally compared with Diffusion-QL on sample efficiency.
4. Conducted an extra ablation study on various generative models (switching from diffusion models to GMMs/EBMs).
5. Rigorously revised the manuscript for a clearer narrative.
6. Clarified several confusions or misunderstandings regarding our paper.
We look forward to further discussions with the reviewers!
---
***
# Official Response to Reviewer 2HKR (2/2)
(continued due to the page limit)
**Q7: Experiment results for other baselines are missing standard deviation bars**
**A7:** We did not report std bars for the other baselines because:
1. It is **common practice in most previous work** (e.g., Diffuser, QGPO, DiffusionQL, IDQL) not to report std bars for other methods, mainly due to the severe page limit.
2. In Table 1, the experimental results for baselines are cited from previous work. Different works may use different metrics to calculate std [1] (std, max-min, 0.5 * std, etc.), and some (IDQL) simply do not report std.
Nonetheless, we present the results with std bars for the two most closely related works, which share the same evaluation metric as ours:
|Algorithm |Half-ME|Half-M|Half-MR|Hop-ME|Hop-M|Hop-MR|Walk-ME|Walk-M|Walk-MR|Ant(U)|Ant(UD)|Ant(MP)|Ant(MD)|Kit-C|Kit-P|Kit-M|
|----|---|---|--|--|---|---|---|--|--|---|--|--|---|--|---|---|
|EDA(ours)|93.2±1.2|57.0±0.5|51.6±0.9|104.9±7.4|98.4±3.9|92.7±10.0|111.1±0.7|87.4±1.1|89.2±5.5|93.0±4.5|81.0±7.4|79.0±4.2|84.0±8.2|81.5±7.3|69.3±4.6|65.3±2.2|
|DiffusionQL|96.8±0.3|51.1±0.5|47.8±0.3|111.1±1.3|90.5±4.6|101.3±0.6|110.1±0.3|87.0±0.9|95.5±1.5|93.4±3.4|66.2±8.6|76.6±10.8|78.6±10.3|84.0±7.4|60.5±6.9|62.6±5.1|
|QGPO|93.5±0.3|54.1±0.4|47.6±1.4|108.0±2.5|98.0±2.6|96.9±2.6|110.7±0.6|86.0±0.7|84.4±4.1|96.4±1.4|74.4±9.7|83.6±4.4|83.8±3.5| | | | |
**Q8: No ablation experiments compare the proposed BDM method with traditional conditional diffusion models.**
**A8:** We note that 4 out of the 8 selected baselines in Table 1 (Diffuser, DiffusionQL, IDQL, QGPO) are based on traditional diffusion models. For instance, EDA(ours) and IDQL share the same Q pretraining method and behavior model architecture, making IDQL an appropriate ablation baseline for comparison with EDA.
Furthermore, we clarify that **traditional diffusion models cannot be used in our proposed alignment method**. These models estimate the score function but do not estimate behavior density, which is required by our loss function in Eq. 14 (see **A1**). Therefore, a direct ablation experiment is not feasible. **This is the very motivation for us to propose the BDM model**.
**Q9: It is hard to tell if the results are significant, given that some baselines are extremely close in performance**
**A9:** We respectfully disagree with the reviewer's comment.
1. **D4RL is a highly competitive benchmark**. The selected baselines like IDQL, Diffusion-QL, and QGPO are recognized state-of-the-art methods, and EDA outperforms each by at least 2% in overall performance. Given the inherent randomness in RL tasks and the diversity across 16 tasks, it is natural for performance between EDA and other SOTA methods to be close on some tasks.
2. EDA demonstrates **significant improvements in sample efficiency and convergence speed** (Figure 5). With just 1% of the data, EDA retains 95% of performance while the best-performing baseline maintains only 80%. **This improvement should not be considered as "close performance."** Such high sample efficiency is crucial for real-world applications of alignment algorithms.
3. Furthermore, EDA clearly surpasses preference-based methods (83 vs. 76), as detailed in Figure 6.
**Q10: Sample efficiency of the diffusion-QL baseline.**
**A10:** We conducted additional experiments studying the sample efficiency of Diffusion-QL. Results are averaged across 9 Locomotion tasks with 5 random seeds each.
| Algorithm | 100% | 10% | 1% |
|----------------|------------|------------|------------|
| **EDA (ours)** | 87.3 ± 3.5 | 85.6 ± 3.7 | 87.3 ± 5.1 |
| **Diffusion-QL** | 87.0 ± 2.9 | 60.2 ± 6.6 | 53.0 ± 10.7 |
| IQL | 75.7 ± 7.7 | 65.0 ± 7.5 | 52.8 ± 7.2 |
| QGPO | 86.4 ± 3.5 | 74.5 ± 3.7 | 71.2 ± 5.1 |
For a fair comparison, we do not use the original critic training method of Diffusion-QL, but switch to the IQL critic training method, which is consistent with ours and the IQL baseline.
**Q11: Typo in Line 242 (Figure instead of Table).**
**A11:** We appreciate the reviewer's careful reading. The typo has been corrected (though we cannot update the official paper during the rebuttal). We have reviewed the paper to ensure no similar mistakes remain.
**Q12: On line 254, you should reference the figure in the text so the reader can correlate what the text is saying with the results in the figure.**
**A12:** We thank the reviewer for the detailed suggestion. We have now ensured that all figures and tables are referenced before they are discussed in the text. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Boosting Graph Pooling with Persistent Homology | Accept (poster) | Summary: The paper introduces topological graph pooling, a novel method that leverages persistent homology to enhance general graph pooling techniques. The use of persistent homology serves two primary purposes, preserving important topological features in the coarsened graphs and generating edge weights for them. For the first purpose, a topological regularization term comparing the topology of the original and coarsened graphs is developed. For the second purpose, edge weights for the coarsened graph are generated based on an importance score derived from the persistence of cycles containing these edges in the filtration. The authors conducted two sets of experiments to assess the utility of PH in graph pooling. The first set demonstrates that coarsened graphs computed using the proposed PH graph pooling method effectively preserve topological information, even better than other dense and sparse pooling methods. The second set compares and benchmarks the topological pooling enhancement applied to some standard pooling techniques against other pooling techniques and standard graph neural networks. The results, obtained through a comprehensive experimental setup, clearly show the superiority of topological pooling over the other benchmarked methods.
Strengths: This paper is one that evokes a sense of "why didn't I think of that?" due to its natural (but powerful) approach. In my opinion, after addressing a few questions, this paper deserves at least a spotlight at the conference.
**Originality**: As far as I know, the connection between graph pooling and graph filtrations, while conceptually straightforward, has not been explored before.
**Significance**: The persistent homology method addresses a significant problem in graph pooling, as highlighted by the authors. Previously, it was unclear which properties should be preserved in the coarsened graphs after pooling. It was even argued that random pooling could yield performances similar to previous state-of-the-art (SOTA) methods, making it unclear why graph pooling was effective and how to design graph pooling techniques appropriately. This work clarifies that preserving the topology of the initial graph is crucial, consistently outperforming other SOTA graph pooling methods across various benchmarks. This makes intuitive sense, as topology concerns global structures, and destroying this information when getting a subgraph seems harmful for the network. This method is not only a clear win for persistent homology in the graph learning field, but also opens new avenues for effective pooling design in GNNs.
**Clarity**: The paper is well-written, with pertinent and informative figures. In particular, Figure 3 effectively illustrates the importance of the contribution.
**Quality**: The experiments are of high quality, encompassing various datasets and configurations. Additionally, the ablation study addressing the effectiveness of topological pooling in preserving input topology is enlightening and supports the use of persistent homology in graph pooling to preserve topology.
Weaknesses: - Figure 1 (a), second row, is strange to me. Usually (and in the pipeline you describe), persistent homology goes from smaller to bigger graphs, and not the inverse path.
- Line 80: Typographical error: "filtraions" should be corrected to "filtrations".
- Equation 1: Missing comma after the formula.
- In lines 114 and 115, you state that filtrations assign each vertex and edge a value. However, this is not always accurate. To obtain proper simplicial complexes through the filtration, the filtration function must assign each edge a value no smaller than the values of its endpoints; that is, the filtration needs to satisfy $f(\{v_1, v_2\})\geq \max(f(v_1), f(v_2))$.
- In the paragraph starting at line 113, you mention that edges have a filtration value given by $f(\{v_1, v_2\}) = \max(f(v_1), f(v_2))$, but, as stated in the previous comment, you do not restrict edge filtration values to these values. I suggest choosing one approach and modifying this paragraph slightly to ensure consistency.
- In line 123 you state that the persistence diagram $ph(G, f)$ is a set of persistence diagrams $\{D_1, D_2, ...\}$. However, in other parts, such as Equation 6, you imply that $ph(G, f)$ is only one of these persistence diagrams. I believe this is inconsistent.
- Line 147 uses the word "filtration" twice, which seems redundant. I propose rephrasing it to "The core of PH is the notion of filtration, the selection of which presents a challenging task."
**Important points**: The score I give is subjected to solve/discuss at least the following points:
- Lines 157 and 158 are unclear to me. How exactly do you map persistence diagrams, given by the births and deaths of their points, to edge weights? I think this should be described formally for several reasons. How do you choose the cycle representatives for the points of the persistence diagrams? What happens if one edge belongs to more than one cycle? I think this is very relevant, even in the proofs.
- In the proof of Theorem 4.1, I do not understand what are the nodes $u$ and $u'$. What are the nodes with different count? Are they unique? Also, the proof relies on understanding the specific function mapping persistence diagrams to edges. Should the proof end at line 537?
- In the proof of proposition 1, what does it mean to be isomorphic for two graphs with features? I have not seen this definition before.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Why do you use the Gumbel-softmax trick? I understand that you obtain a full graph using usual graph pooling, but, precisely, as you are learning filtrations and using persistent homology, you are also capturing in the persistence diagrams the topology of subgraphs that may not need to be fully connected. Did you try to avoid the Gumbel-softmax trick?
- In lines 534, 535, 546, and 541 you write "circles", not "cycles". Is this intentional? I believe there are more "circles" distributed across the text.
- I have not seen any mention of more general pooling methods, such as cell complex pooling or simplicial complex pooling. Do you think your work can be extended to encompass general pooling on topological deep learning data?
- Have you tried using some fixed filtrations instead of learnable filtrations for the graph benchmarks? If so, was there a significant difference compared to learnable filtrations?
- In the complexity section, you mention that persistence diagrams can be computed very efficiently for dimension one. Do you have a reference for this claim? I was aware of this for dimension zero, but I have not encountered it for dimension one before.
- In the Q1 experiments, the filtration is learned during training or is it also fixed as for benchmarking?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The limitations addressed are too simple. I think more effort is needed in this aspect.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Figure 1 (a), second row, is strange
>Figure 1(a) illustrates that PH and pooling naturally share a similar hierarchical structure, thereby aligning well. It does not imply that PH must progress from smaller to larger graphs.
Typographical errors and incorrect expressions
>We thank Reviewer TU8V for the valuable comments on our notations and expressions, which have significantly enhanced the readability of our paper. We have revised all instances according to the suggestions.
In lines 114 and 115, you state that filtrations assign each vertex and edge a value. However, this is not always accurate.
>We thank Reviewer TU8V for the valuable comments on our statement of filtrations. We apologize for the unclear descriptions of filtrations. In our revised manuscript, we will explicitly state at the beginning of this paragraph that the filtration on edges is set as $f(\{v_1, v_2\}) = \max(f(v_1), f(v_2))$.
Inconsistent descriptions of persistence diagrams in line 123
>We thank reviewer TU8V for the valuable comments. In our revised manuscript, we will add a slice to Eq. (6), e.g. $\mathcal{D}_1 = ph(G, f)[1]$, to keep consistency.
Line 147 uses the word "filtration" twice, which seems redundant
>We thank reviewer TU8V for the valuable comments. In our revised manuscript, we will rephrase this sentence as suggested.
Lines 157 and 158 are unclear
>Edge weights are encoded by persistence (the lifespan of each edge: |death_time - birth_time|). Therefore, as stated in Lines 156-158, edges that do not form cycles have the same birth and death times, resulting in zero persistence. Each cycle is paired with the edge that created it, and the other edges in the cycle are assigned a dummy tuple value. If an edge belongs to more than one cycle, only the first time it gives birth to a cycle is considered. These settings were proposed and adopted in [1, 2], and we follow this common practice. Formal descriptions are provided in Appendix B.1, Lines 508-513. To enhance clarity, in the final manuscript we will move this part to the main paper and revise the relevant sections in detail.
In the proof of Theorem 4.1, I do not understand what are the nodes 𝑢 and 𝑢′. What are the nodes with different count? Are they unique? Also, the proof relies on understanding the specific function mapping persistence diagrams to edges. Should the proof end at line 537?
>Here we mean that we have graphs $\mathcal{G}$ and $\mathcal{G}'$, where $u \in \mathcal{G}$ and $u' \in \mathcal{G}'$ are nodes with a unique label count in the WL test. For example, if $\mathcal{G}$ contains 2 green nodes, 1 blue node and 1 red node, then node $u$ stands for the green node. The proof of Theorem 4.1 (PH is at least as expressive as WL) ends at line 537, and the subsequent paragraph demonstrates the existence of cases where PH is more expressive than WL. In our revised manuscript, we will add more details for better clarity.
In the proof of proposition 1, what does it mean to be isomorphic for two graphs with features?
>We apologize for our ambiguous description. Here we consider a graph isomorphic test context, where node features denote the initial node labels.
Why do you use the Gumbel-softmax trick?
>There are two reasons. First, the utilized learnable filtration [1] operates on nodes and edges, treating different weights equally. Second, edge weights may span a wide range in usual graph pooling (see Appendix D for empirical evidence). Therefore, we use the Gumbel-softmax trick to resample the subgraph into an unweighted graph. In our ablation study (Appendix E.5), TIP-NR represents our method without the Gumbel-softmax trick, and it demonstrates inferior performance.
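>For illustration, a hypothetical sketch of a binary Gumbel-softmax relaxation for one edge (the logit parameterization and names are our assumptions, not the paper's implementation):

```python
import math
import random

def gumbel_keep_prob(logit, tau=1.0, rng=random):
    """Relaxed Bernoulli sample for keeping one edge.

    logit: learned score for the edge; tau: temperature. As tau -> 0 the
    soft sample approaches a hard 0/1 decision, yielding an (approximately)
    unweighted resampled subgraph while remaining differentiable in logit.
    """
    g_keep = -math.log(-math.log(rng.random()))
    g_drop = -math.log(-math.log(rng.random()))
    a, b = (logit + g_keep) / tau, (-logit + g_drop) / tau
    m = max(a, b)  # numerically stable two-way softmax
    return math.exp(a - m) / (math.exp(a - m) + math.exp(b - m))

random.seed(0)
avg = sum(gumbel_keep_prob(3.0, tau=0.5) for _ in range(2000)) / 2000
print(avg)  # high average keep probability for a strongly positive logit
```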
Write "circles", and not "cycles".
>We apologize for our inconsistent expressions. Both words refer to the same concept, and we will replace "circles" with "cycles" in our revised manuscript.
I have not seen any mention of more general pooling methods, such as cell complex pooling or simplicial complex pooling. Do you think your work can be extended to encompass general pooling on topological deep learning data?
>Thank you for the insightful comment. We believe that our work has the potential to be extended to encompass general pooling in topological deep learning data, as they also exhibit graph structures and one-dimensional topological features. We leave this extension for future work.
Have you tried using some fixed filtrations instead of learnable filtrations
>As suggested by Reviewer fBuZ, we consider using an MLP with randomly initialized and fixed parameters as the filtration function, named **TIP-F**. The experimental results (see Table S2) on four benchmark datasets show that this kind of filtration is not as good as learnable filtrations in our method.
Reference for complexity analysis
>The complexity of persistent homology has been adequately discussed in [2, 3]. The bottleneck of PH computation is dominated by the complexity of sorting all edges, i.e. $\mathcal{O}(m \log m)$, where $m$ is the number of edges.
In the Q1 experiments, the filtration is learned during training or is it also fixed as for benchmarking?
>In the Q1 experiments, the filtration is learned during training. We only keep a fixed filtration in the evaluation of topological similarity.
The limitations addressed are too simple.
>We apologize for our inadequate discussions of the limitations. In addition to the limitations mentioned in our paper, we have found that our method lacks the ability to discriminate between graphs when the number of connected components is the only distinguishing factor. We provide empirical evidence (see Table S4) and analysis in our response to Reviewer 3dRY and will include this in our revised manuscript.
>[1] Graph filtration learning. ICML 2020.
[2] Topological graph neural networks. ICLR 2022.
[3] Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology. ICML 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for addressing all my comments.
However, I would need to see a revision of proof 4.1, as I still do not properly understand the proof. In abstract simplicial complexes, self-loops are not allowed (you work with sets, not multisets, so you cannot have an abstract simplicial complex with the multiset {{u, u}}). If you do not consider self-loops, then simply take two graphs containing one and two vertices, respectively. As there are no cycles, there are no points in the 1-persistence diagrams of both graphs, so the graphs are not distinguishable by 1-PH but they are by 1-WL. I think you also need dimension zero here, as in [37] (and then you can use the pairing lemma). Can you point to some references working with simplicial complexes where self-loops are admitted? This is definitely not standard.
Regarding the proof of Proposition 1, after carefully rereading it again, I see that you prove that the sum of features is the same, but how can you infer that the two pooled graphs are then the same?
These two results are not relevant to the content of the paper itself, but until I either completely understand them or they are removed, I'm afraid I cannot propose an acceptance of the paper. I'm lowering my score to 4 until these two comments are resolved. Sorry for the inconvenience, I thought these two errors were a problem of my own understanding, but now I'm not so sure.
---
Reply to Comment 1.1.1:
Title: Thanks to Reviewer TU8V and some extra clarification
Comment: Q: In abstract simplicial complexes, self-loops are not allowed.
> Thank you for pointing out our inaccurate description. We acknowledge that standard simplicial complexes do not allow self-loops. In our method, we do not modify the computation of PH to adapt it to graphs with self-loops. We simply augment the 1-dimensional persistence diagrams by putting the self-loops on the diagonal of the persistence diagrams obtained by **standard PH**. Therefore, the principles of simplicial complexes and PH are not violated.
>
>We apologize for our unclear descriptions about this part. In our revised manuscript, we will term the **self-loop augmented 1-dimensional persistence diagram** as $\mathcal{\tilde{D}}_1$, and make clear descriptions such that readers can be aware that $\mathcal{\tilde{D}}_1 = \mathcal{D}_1 + \mathrm{self loops}$ rather than $\mathcal{\tilde{D}}_1 = ph(G+\mathrm{self loops})$.
Q: I would need to see a revision of proof 4.1
> We provide a **proof sketch** of Theorem 4.1. We first assume the existence of a sequence of WL labels and show how to construct a filtration function $f$ from it. Consider nodes $u$ and $u'$ with unique label counts in $\mathcal{G}$ and $\mathcal{G'}$; our filtration is then constructed such that their filtration values $f(u)$ and $f(u')$ are unique and different. Consider all three cases: (1) $u$ and $u'$ are both in cycles; (2) $u$ and $u'$ are both not in cycles; (3) one of $u$ and $u'$ is in cycles and the other is not. In all cases, $f(u)$ and $f(u')$ are revealed in their respective persistence diagrams. Since $f(u)$ and $f(u')$ are unique and different, we can use the persistence diagrams to distinguish the two graphs.
>
> We apologize for our unclear proof of the theorem. Note that in the three cases considered above, **self-loops and cycles are still considered separately**. Therefore, **our proof of Theorem 4.1 is based on the self-loop augmented 1-dimensional persistence diagram, which is still within the scope of abstract simplicial complex**. In our revised manuscript, we will first give a proof sketch, and then reorganize our proof into several steps for better clarity.
Q: In Proposition 1, the sum of features is the same, but how can you infer that the two pooled graphs are then the same?
> As we consider labelled graphs (with node features), Proposition 1 is about the isomorphism-invariance property over such graphs, where we assume that the two graphs are isomorphic, i.e. for two graphs $G_1$ and $G_2$ with node features $X$ and $Y$, there exists a permutation $P$ such that $X = P(Y)$. It is clear that $\sum{X_i} = \sum{Y_i}$ at this point. The proof of Proposition 1 (Eq. (9)) states that given an equal sum of node features (the basic setting of isomorphic graphs), the sums of node features are still equal after pooling. **As stated in Lines 551-556, all the operations are permutation equivariant, so after pooling the graph connectivities are still isomorphic to each other.** Therefore, the pooled graphs are still isomorphic to each other.
>
>We apologize for our unclear descriptions that cause ambiguity. In our revised manuscript, we will add an additional paragraph to emphasize the permutation equivariant property of all operations in pooling, specifically:
(Informal proof) Let $(G:=(N,E),A)$ be a graph with connectivity $A$. Consider pooling $A' = B A B^\top$, where $B^\top=\mathrm{GNN}(G)$ is an assignment. If we permute $G$ using a permutation matrix $P$, then $B^\top P^\top = \mathrm{GNN}(GP)$ (because GNNs are permutation-equivariant on node-level outputs) and the permuted connectivity is $PAP^\top$. Thus the permuted graph after pooling is $A'=B^\top P^\top P A P^\top PB$, which means that isomorphic graphs after pooling are still isomorphic.
>
>We hope that this can help readers understand that after pooling the graph connectivities are still isomorphic to each other.
> Thank you for your feedback and for bringing these concerns to our attention. We apologize for any confusion these results may have caused. We have carefully reviewed the issues you've highlighted and will make the necessary revisions to either clarify or remove the results in question. Your insights are important to us, and we will work diligently to address them in our revised submission.
---
Rebuttal 2:
Title: Thanks to Reviewer TU8V and some extra clarification
Comment: Thank you for your understanding and for raising the score. We appreciate your careful consideration of Theorem 4.1 and will revise the statement to ensure it accurately reflects the necessary conditions.
Regarding your final question, we apologize for any lack of clarity in our previous explanation. Initially, we consider a graph-level task (e.g. graph classification). In this case, if the sum of node features is invariant under any permutation of the graph, then it is isomorphism invariant (refer to Equation (9)). Now we provide a more general proof of the invariance property.
**Proof.** Following the notations in our last response, we prove isomorphism invariance at the feature level and the *connectivity* level.
For **feature-level invariance**, let $X \in \mathbb{R}^{n \times d}$ be the node features, $P \in \\{0,1\\}^{n \times n}$ be the permutation matrix, $B\in \mathbb{R}^{n' \times n}$ be the assignment matrix, and $P^{\top}X$ be the permuted node features. The node feature map after pooling is denoted as $X' \in \mathbb{R}^{n' \times d}$; then we have
$$X' = BX$$
If we permute $G$ using the permutation matrix $P$, the permuted node features after pooling are $$X' = (BP)(P^{\top}X)$$
which proves the isomorphism invariant property of pooling at feature level.
For **connectivity-level invariance**, we copy our previous responses here.
$$A'=(B P) (P^\top A P) (P^\top B^\top)$$
Note that in our previous response with the *informal proof*, we made a mistake in $A'=B^\top P^\top P A P^\top PB$ due to a rush, where $B^\top P^\top$ does not have compatible dimensions. Besides, for symmetric connectivity $A$, $PAP^\top=P^\top AP$. We correct it in this version.
This completes the proof.
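The two identities can be checked numerically (our own verification sketch, using random matrices in place of learned GNN outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_pool, d = 6, 3, 4
X = rng.standard_normal((n, d))          # node features
A = rng.standard_normal((n, n))
A = A + A.T                              # symmetric connectivity
B = rng.standard_normal((n_pool, n))     # assignment matrix
P = np.eye(n)[rng.permutation(n)]        # permutation matrix

# Feature-level invariance: (B P)(P^T X) == B X
assert np.allclose((B @ P) @ (P.T @ X), B @ X)
# Connectivity-level invariance: (B P)(P^T A P)(P^T B^T) == B A B^T
assert np.allclose((B @ P) @ (P.T @ A @ P) @ (P.T @ B.T), B @ A @ B.T)
print("both invariance identities hold")
```

Both checks pass because $P P^\top = I$ for any permutation matrix.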
We will revise our paper up to this new proof accordingly. Thank you once again for your valuable feedback and support. | Summary: This paper proposes a TDA-based mechanism for storing phase information in the pooling layer of a GNN. The proposed approach resamples the connections in the graph and scales the edge weights using persistence information from a one-dimensional persistence diagram. Experiments on artificial data for concept evaluation and on several benchmark datasets were also carried out.
Strengths: - The retention of phase information in the pooling layer is a new and interesting approach.
- Experiments have shown that performance can be improved.
Weaknesses: - The proposed method uses 1-dim Betti numbers, but their validity is not known. All 1-dim death times seem to be the same (the maximum filtration time) and seem to carry only information about the last connection of the cycle. Only limited information on the cycle seems to be stored, which is less convincing.
- Learning to preserve topology information in persistent diagrams is proposed in [1]. (It would be inappropriate not to mention this.) This paper vectorises PDs for training, but [1] and [2] show the possibility of learning in a more data-loss-free form, which lacks consideration. The novelty of the method also seems to be limited to the introduction of phase conservation in the pooling layer.
- The paper only considers TU Datasets; ZINC and OGB, which have become mainstream in recent evaluations, should also be included in this evaluation.
[1] Topological Autoencoders, ICML 2020
[2] Optimizing persistent homology based functions, ICML 2021
Technical Quality: 2
Clarity: 2
Questions for Authors: See Weakness
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: The proposed method uses a 1-dim betti number, but its validity is not known. All 1-dim death times seem to be the same (maximum time for filtration) and seem to be only information about the last connection of the cycle. Only limited information on the cycle seems to be stored and is less convincing.
>It is common practice to use PH to characterize 0- and 1-dimensional Betti numbers and integrate them with GNNs [1, 2, 3, 4], which has been proven effective. Following the practical implementations in the previous works mentioned above, the 1-dim death times are equal and set to a large constant value, but the birth times vary according to the filtration values, resulting in different persistences (|death_time - birth_time|). We adhere to the conventions in [1] and utilize persistence rather than extended persistence for efficiency.
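>As a small illustration of this convention (our own sketch, with an arbitrary constant for the fixed death time):

```python
# Each 1-dim feature is born at its edge's filtration value and dies at a
# fixed large constant, so persistence still varies across features.
MAX_FILTRATION = 10.0  # arbitrary constant for illustration

def persistences(birth_times, death_time=MAX_FILTRATION):
    return [abs(death_time - b) for b in birth_times]

print(persistences([0.5, 2.5, 7.0]))  # -> [9.5, 7.5, 3.0]
```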
W2: Learning to preserve topology information in persistent diagrams is proposed in [5]. (It would be inappropriate not to mention this.) This paper vectorises PDs for training, but [5] and [6] show the possibility of learning in a more data-loss-free form, which lacks consideration. The novelty of the method also seems to be limited to the introduction of phase conservation in the pooling layer.
>We thank the reviewer for providing another perspective and evidence to support our claim that preserving topological information is meaningful in pooling. Our motivation arises from the observation that PH and graph pooling naturally align well, as evidenced in Figure 1. Preserving topological information in general can also be considered as graph pooling, as noisy topology is filtered out and essential parts are preserved. This is the major innovation of our method. We apologize for the unclear statement of our motivation and will reorganize and include the new references in our revised manuscript.
>
>We acknowledge that [5] and [6] provide effective ways to preserve topology information. However, directly calculating the distance between PDs is also a stable, expressive, and practical way to assess the similarity of two graphs, as proved by [4]. Additionally, vectorizing PDs is a common practice [1, 3, 7], which we adopt for its efficiency and flexibility. Autoencoders emphasize upstream tasks and therefore have higher demands for data loss, while pooling focuses on downstream tasks and has no such requirement.
W3: The paper only considers TU Datasets, ZINC and OGB, which have become mainstream in recent evaluations, should also be included in this evaluation.
>We have already considered one OGB dataset MOLHIV, as shown in the last column of Table 2. To address your concern, we provide additional experimental results on the ZINC dataset (see Table S1) in our response to Reviewer fBuZ.
>[1] Topological graph neural networks. ICLR 2022.
[2] Graph filtration learning. ICML 2020.
[3] Deep learning with topological signatures. NIPS 2017.
[4] Curvature filtrations for graph generative model evaluation. NIPS 2023.
[5] Topological autoencoders. ICML 2020.
[6] Optimizing persistent homology based functions. ICML 2021.
[7] Persistence enhanced graph neural network. AISTATS 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification.
Regarding the first point: the authors argue that this is not a problem because the method is generally known to be good, but this does not seem to be an appropriate response, as even if it is generally good, it does not necessarily mean that it is good with regard to the subject in question.
I assume that only insufficient information on the timing of cycle generation for 1-dim remains. However, after reading the comments, it occurred to me that they might be using an extended diagram that combines 0-dim and 1-dim. If so, it is enough to check that the text does not cause a misunderstanding, because perhaps a misunderstanding has just occurred.
On the second point: this question concerns novelty. Assuming we have recognised that each is appropriate, we would like to clarify which parts are major novel contributions. The main novelty is the concept of the topology preservation of the proposal, and each tool is existing, but is the contribution of the specific construction of an effective combination among them, or does each tool also have its own challenges and is there novelty in those areas as well? This is important for determining the level of novelty and at present we recognise it as the former. In your opinion, what are your views on this?
---
Reply to Comment 1.1.1:
Title: Thanks to Reviewer HRv2 and some extra clarification
Comment: Thank you for your insightful feedback. We are happy to address your concerns and questions. Detailed responses to your comments are provided below.
Q1:
> We apologize for any previous confusion. Initially, we observed in Figure 1(a) that PH and graph pooling both seek to coarsen a given graph in a hierarchical fashion, which motivated us to conduct **a preliminary experiment** that uses a pioneering graph pooling method to perform graph classification while simultaneously computing the persistence of the coarsened graph. Interestingly, **we observed that a monotonic trend between the pooling ratio and non-zero 1-dimensional persistence is commonly shared by a wide range of graph data**, as evidenced in Figure 1(b). We emphasize that we are the first to report these meaningful findings. This underscores the validity of using PH in graph pooling due to their shared structural patterns, thus motivating us to integrate topological features into graph pooling. Moreover, extensive experiments (see Table 2) also demonstrate that 1-dimensional topological features are valid in boosting three classic dense pooling methods.
>
>We acknowledge that we extended persistence diagrams to increase expressivity, by storing node-related information in self-loops. We apologize for any unclear descriptions. To avoid ambiguity, we will make the following major modifications to our revised manuscript:
>1. In Section 4.2, we will add preliminary experiments in Table S3 (in the attached PDF in our General Response) to explain why we do not directly incorporate ordinary 0-dimensional features. Subsequently, the self-loop augmented 1-dimensional persistence diagram is denoted as $\tilde{\mathcal{D}}_1$.
2. In Section 4.3, we will revise our claim to state that "self-loop augmented 1-dimensional topological features computed by PH are sufficient to be more expressive than 1-WL."
Q2:
>Thank you for raising this question. Our novelty and contribution encompass the following 3 aspects, extending beyond merely combining existing tools effectively:
>1. **Findings**. We are the first to report the finding that PH and graph pooling naturally align well across multiple datasets (see Figure 1(b)), which motivates us to use PH to boost graph pooling and, we believe, may be advisable for related methodological designs.
2. **Methodology**. Guided by our findings, we developed three specific modules in our methodology:
> - Given that coarsened graphs in dense graph pooling are always fully connected (see Figures 3, 4) with widely ranging edge weights (see Appendix D for empirical evidence), we designed a differentiable **Resampling** process to make the coarsened graphs well suited for learnable filtrations. This paves the way for the subsequent modules to effectively inject topological information into graph pooling.
> - Utilizing the resampled graphs from the previous step, we designed a **Persistence Injection** module, capitalizing on the alignment between persistence and graph pooling.
> - We designed a **Topological Loss Function**. This can be viewed as a neural surrogate to regularize that topological structures should be preserved in graph pooling.
Experimental results in Appendix E.5 demonstrate that without our Resampling process, directly combining learnable filtrations with graph pooling may hinder model performance in some cases. Persistence Injection and Topological Loss also proved effective for pooling. Hence, from a methodological standpoint, we contribute a straightforward yet effective way to integrate PH with graph pooling.
3. **Theoretical Analysis**. We demonstrate that self-loop augmented **1-dimensional** persistence diagrams can boost the expressive power of graph pooling methods.
We acknowledge that our previous clarification was somewhat weak. In our revised manuscript, we will modify our summary of contributions in the Introduction and reorganize Section 4.2 for improved comprehension of our contribution.
>
>[1] Hofer, Christoph, et al. "Graph filtration learning." International Conference on Machine Learning. PMLR, 2020. | Summary: This paper proposes a topology-based graph pooling layer, TIP. TIP fits easily with the current graph pooling frameworks. Once the pooled graph is obtained from any existing graph pooling techniques, the authors make this pooled graph adaptable to persistent homology. They consider the PH of this graph and optimize it by minimizing the topological loss function between the current persistence diagram and original persistence diagram. The authors show experimental results on some synthetic and real-world datasets.
Strengths: 1. It is, indeed, a new idea to combine PH for graph pooling to maintain topological information in the graph during pooling.
2. Experiment to show that the proposed method preserves important topological features is well-chosen.
3. The paper provides experimental evidence that addition of TIP into a standard GNN framework improves the performance on most datasets for graph classification.
Weaknesses: 1. Theorem 4.1 is incorrect as stated. Only the 1-dimensional topological features computed by PH cannot be as expressive as 1-WL test in distinguishing non-isomorphic graphs. Consider two graphs with no loops, with different numbers of vertices. 1-WL test will be able to distinguish these graphs while 1-dimensional topological features computed by PH will not be able to. One would need to use the 0-dimensional topological features computed by PH. Refer to [1]. Adding self-loops is just a proxy for counting the number of vertices, which can be done using 0-dimensional features.
2. Moreover, the authors, in [1], have already shown that PH is at least as expressive as 1-WL test. Hence, the theoretical contribution of the paper does not seem significant.
[1]: Topological Graph Neural Networks, Horn et.al
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What happens when you consider graphs with more than one component? How does TIP perform in that scenario compared to other pooling mechanisms?
2. Have you tried other ways to incorporate $L_{topo}$? For e.g., by choosing a different vectorization of PDs such as persistence images or rational hats?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, the authors have discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: Theorem 4.1 is incorrect as stated. Only the 1-dimensional topological features computed by PH cannot be as expressive as 1-WL test in distinguishing non-isomorphic graphs. Consider two graphs with no loops, with different numbers of vertices. 1-WL test will be able to distinguish these graphs while 1-dimensional topological features computed by PH will not be able to. One would need to use the 0-dimensional topological features computed by PH. Refer to [1]. Adding self-loops is just a proxy for counting the number of vertices, which can be done using 0-dimensional features.
>As stated in our proof of Theorem 4.1, adding self-loops results in additional cycles, which are reflected in the diagonal of one-dimensional persistence diagrams (PDs). Consequently, the augmented one-dimensional PDs can distinguish graphs with different numbers of nodes, as they correspond to different numbers of points in the one-dimensional PDs. Therefore, this distinction can be achieved using 0-dimensional or 1-dimensional features. Our method can be easily extended to integrate both 0-dimensional and 1-dimensional features, which we denote as TIP-0. Preliminary experiments, shown in Table S3, indicate that the inclusion of zero-dimensional topological features merely increases runtime. Thus, our proposed method focuses solely on 1-dimensional PDs.
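>To illustrate the augmentation (our own sketch of the construction described above, not the paper's code):

```python
def augment_diagram(diagram_1d, node_filtration_values):
    """Append one diagonal point per node (from its self-loop) to the
    standard 1-dimensional persistence diagram."""
    return list(diagram_1d) + [(f, f) for f in node_filtration_values]

d1 = [(0.4, 10.0)]  # one genuine cycle
three_nodes = augment_diagram(d1, [0.1, 0.2, 0.3])
four_nodes = augment_diagram(d1, [0.1, 0.2, 0.3, 0.4])
# Graphs differing only in node count now yield diagrams of different size:
print(len(three_nodes), len(four_nodes))  # -> 4 5
```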
W2: Moreover, the authors, in [1], have already shown that PH is at least as expressive as 1-WL test. Hence, the theoretical contribution of the paper does not seem significant.
>In our response to Reviewer fBuZ, we stated that our theoretical novelty lies in the further proof that PH with **1-dimensional** features is also as expressive as WL, while [1] only proved the **0-dimensional** case. Moreover, we theoretically proved the **isomorphism-invariance property** of our method.
Q1: What happens when you consider graphs with more than one component? How does TIP perform in that scenario compared to other pooling mechanisms?
>To address your concern, we generated a synthetic dataset named 2-Cycles, comprising two balanced classes of 1,000 graphs each. The dataset consists of either two disconnected large cycles (class 0) or two large cycles connected by a single edge (class 1), so the distinguishing factor between the classes is the number of connected components. Node counts range from 10 to 20, with three-dimensional random node features. For the model configuration, we uniformly employed one GCN layer plus one pooling layer. The evaluation criteria remained consistent with those outlined in our paper. Experimental results in Table S4 indicate that our method is not effective in distinguishing similar graphs with different numbers of connected components. This aligns with our expectations, as our method does not explicitly incorporate such information, given that most graphs in real-world datasets are connected.
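>A minimal sketch of how such a dataset can be generated (hypothetical helper names; our actual generation script may differ):

```python
import random

def two_cycles_graph(label, rng):
    """Generate one 2-Cycles instance: two disconnected cycles (label 0)
    or the same two cycles joined by a single bridge edge (label 1)."""
    n1 = rng.randint(5, 10)
    n2 = rng.randint(5, 10)
    edges = [(i, (i + 1) % n1) for i in range(n1)]             # first cycle
    edges += [(n1 + i, n1 + (i + 1) % n2) for i in range(n2)]  # second cycle
    if label == 1:
        edges.append((0, n1))  # single connecting edge
    return n1 + n2, edges

def n_components(n, edges):
    # Union-find over the n nodes to count connected components.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)})

rng = random.Random(0)
n0, e0 = two_cycles_graph(0, rng)
n1_, e1 = two_cycles_graph(1, rng)
print(n_components(n0, e0), n_components(n1_, e1))  # -> 2 1
```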
Q2: Have you tried other ways to incorporate $L_{topo}$? For e.g., by choosing a different vectorization of PDs such as persistence images or rational hats?
>Thanks for the valuable comment. There are many topological descriptors; we simply follow previous works [1, 2] and tried several of them. To make the most of PDs, we use several transformations and concatenate their outputs, including the triangle point transformation, Gaussian point transformation and line point transformation. We have not tried rational hats and are grateful for Reviewer 3dRY's suggestion. However, given the time budget of the rebuttal, we leave this as future work.
>[1] Graph filtration learning. ICML 2020.
[2] Topological graph neural networks. ICLR 2022.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their efforts.
However, I am not convinced by Theorem 4.1. Adding self-loops seems forced, just to fix the case of different number of nodes.
Moreover, in the proof of Proposition 1, graphs can be isomorphic with the node features being different, right? The filtration function is S_n-equivariant. But how does that guarantee that the summation of node features on two different graphs is equal to, to begin with?
I appreciate the efforts put into the experiment about different number of connected components.
Hence, as of now, I would like to stick to my score.
---
Reply to Comment 1.1.1:
Title: Thanks to Reviewer 3dRY and some extra clarification
Comment: Q: However, I am not convinced by Theorem 4.1. Adding self-loops seems forced, just to fix the case of different number of nodes.
> In our proof of Theorem 4.1, we categorize the cases into 3 categories: (1) $u$ and $u'$ are both in cycles; (2) $u$ and $u'$ are both not in cycles; (3) one of $u$ and $u'$ is in cycles and the other is not. Adding self-loops as augmentation is used to handle case (2). Theoretically, **this simple yet effective augmentation eliminates the necessity of computing 0-dimensional topological features, thus reducing computational burdens**. Furthermore, empirical results (Table S3 in the attached PDF in our General Response) showed that **using 0-dim topological features merely increases runtime**, so our proposed method focuses only on 1-dim persistence diagrams.
Q: In the proof of Proposition 1, graphs can be isomorphic with the node features being different, right?
> We apologize for any ambiguity caused by our unclear descriptions. In the context of graph isomorphism, node features refer to node labels (e.g. node colors or degrees). Therefore, for isomorphic graphs with node features (labels) $X$ and $Y$, there exists a permutation $P$ such that $X = P(Y)$. Under this setting, the sums of node features are equal. If graphs have totally different node features, they cannot be considered isomorphic.
In our revised manuscript, we will **replace "node features" with "node labels"** and provide detailed descriptions of the graph isomorphism setting.
>Thank you for your feedback and for pointing out these concerns. We apologize for any confusion the results may have caused. After carefully reviewing the issues you highlighted, we will make the necessary revisions. Your insights are valuable to us, and we are committed to addressing them thoroughly in our revised submission.
---
Reply to Comment 1.1.2:
Title: Thanks to Reviewer 3dRY and some extra clarification
Comment: Thank you for your continued interest in our paper. We noticed that you have concerns regarding the proof of Proposition 1. We apologize for the lack of clarity in our previous proof. In our latest response to Reviewer TU8V, we have provided a more general and formal proof, which you might find interesting. | Summary: This paper proposes a new and systematic way to integrate persistent homology into GNNs for improved performance. Rather than applying PH in a brute force manner as much existing work does, this work adapts the method based on the coincidence between the graph pooling mechanism and the filtration of PH. The work takes advantage of the dynamic nature of PH via the filtration and results in improved performance in message passing.
Strengths: This was a very nice paper that studies an important problem and proposes a novel approach using persistent homology. In my mind, this is perhaps among the first most convincing applications and demonstrations of the relevance of persistent homology to deep learning theory and implementations.
Weaknesses: The writing style of the paper could be improved; it would benefit from a spelling and language check.
Technical Quality: 3
Clarity: 2
Questions for Authors: GNNs are known to encompass other NN models and architectures, such as CNNs and transformers. Is the approach adaptable to these special cases of GNNs? Does the performance also hold up in machine learning tasks, such as image classification? Would the theoretical guarantees also need to be adapted?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations were considered in the check list, however, I would like to see these concerns integrated into the paper. In addition, it would be helpful to have examples and demonstrations of the limitations of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Questions: Is the approach adaptable to these special cases of GNNs? Does the performance also hold up in machine learning tasks, such as image classification? Would the theoretical guarantees also need to be adapted?
>The proposed approach is adaptable as long as the data can be represented as graphs with specific topological structures. There is no need to modify the theoretical guarantees.
Limitations: I would like to see these concerns integrated into the paper. In addition, it would be helpful to have examples and demonstrations of the limitations of the method.
>Thanks for your valuable comment. In our revised manuscript, we will relocate the discussion of limitations to Section 6, Conclusion. The limitations are evident: tree-like graphs have no cycles, rendering the one-dimensional persistence diagram meaningless. Additionally, in our response to Reviewer 3dRY, we provide supplementary experiments on synthetic datasets, demonstrating that one limitation of our method is its inability to distinguish between graphs when the number of connected components is the only differentiating factor.
---
Rebuttal Comment 1.1:
Title: Acknowledgment of rebuttal
Comment: I have read the reports by other reviewers and the authors' responses, as well as the additional experiments and proposed improvements. I maintain my opinion that this is a strong paper that I would like to see accepted at the conference, and am therefore maintaining my positive rating.
---
Reply to Comment 1.1.1:
Title: Thanks to Reviewer k5BH
Comment: Thank you for taking the time to carefully review our additional experiments and responses. We sincerely appreciate your continued positive assessment and your support for our work. Your thoughtful feedback has been invaluable in helping us refine our paper, and we are grateful for your recommendation. | Rebuttal 1:
Rebuttal: **General Response:**
We would like to express our sincere gratitude for your thorough review of our manuscript and for providing valuable feedback and suggestions. Your expertise and insights have been instrumental in improving the quality and clarity of our work.
During the rebuttal period, __we provide additional experimental results (tables) and a revised figure in a PDF file__. All experimental code will be updated and released accordingly. Specifically:
1. **Theory.** Reviewers fBuZ, 3dRY, and TU8V raised concerns about our theorem. We would like to emphasize that the innovation of our theory lies in further proving that the expressive power of one-dimensional topological features is stronger than that of the Weisfeiler-Lehman (WL) test. To clarify the proof of our theorem, we will reorganize it and explain it step by step.
2. **Motivations.** Reviewer fBuZ is concerned about our motivation for preserving cycles in pooling, while Reviewer HRv2 provides literature evidence supporting the preservation of topological features. Our motivation arises from the observation that persistent homology (PH) and graph pooling naturally align well, which motivates us to preserve topological features in the pooling process. Preliminary experiments (Table S3) indicated that using zero-dimensional topological features merely increases runtime; hence, our method focuses on one-dimensional persistence diagrams, which we interpret as preserving cycles.
3. **Experiments.** Both Reviewers fBuZ and HRv2 suggested adding experiments on the ZINC dataset. We have provided results in Table S1, where a consistent improvement is observed when integrating our method. To address the concerns of Reviewers fBuZ and TU8V about using fixed filtration functions, we use an MLP with randomly initialized and fixed parameters as the filtration function, as suggested by Reviewer fBuZ. Experimental results in Table S2 demonstrate that learnable filtrations are more effective than fixed ones.
4. **Limitations.** To address Reviewer 3dRY's question about our method's performance on graphs with more than one component, we generated a synthetic dataset named 2-Cycles. As most graphs contain only one connected component and we do not explicitly incorporate such information, our method does not show significant improvements for multi-component graphs, as evidenced in Table S4. This can be considered a limitation of our method, which we also discussed in response to Reviewer TU8V.
We sincerely appreciate the time and effort you have devoted to reviewing our manuscript and providing constructive feedback. Your contributions have significantly strengthened our research.
## **PDF**
Pdf: /pdf/86458a4091d432bdde61cbf276fbab29ac59f4e7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces TIP (topology-invariant pooling), a PH-based pooling layer. The proposed approach involves resampling graph connections from soft-cluster assignment matrices and adjusting edge weights using persistence information derived from 1-dimensional diagrams. TIP leverages a loss function designed to maintain topology by leveraging vectorizations of the 1-dimensional diagrams. Experimental evaluations conducted on synthetic and real classification datasets illustrate the performance of the proposed method.
Strengths: - Flexibility: TIP is agnostic to the GNN method and can be easily combined with GNN layers
- Novelty: This combination of PH for graph pooling tasks is novel.
Weaknesses: - The theoretical part of the paper does not seem very relevant and novel. Theorem 4.1 only states the **existence** of a filtration that is at least as expressive as the 1-WL test, which has limited impact in practice. Similar results are well-known in the literature (e.g., [1]). Also, in the analysis, we must allow filtration functions f to leverage graph structure, not only node features as defined in the paper. Finally, it seems that Theorem 4.1 can be easily turned into a strictly more expressive statement (see question below).
- Another main concern revolves around the motivation for preserving cycles in general-purpose graph coarsening methods. Overall, I found the discussion/motivation for this design choice hand-wavy.
- While the proposed method improves over the base pooling layers (e.g., DiffPool), it does not lead to significant gains over some (pooling-free) models in Table 2. Also, the experiments include only 1 OGB dataset --- I recommend adding ZINC or at least another OGB dataset.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. For the non-zero persistence tuples, are the death times always equal $\infty$ (or f_max + constant)? If so, isn't TIP exploiting only (non-persistent) homology?
2. Figure 2 does not show self-loops after the resampling procedure.
3. Including the definition of simplicial complexes (SCs) would be helpful. Are graphs (with self-loops) 1-dimensional SCs? Do self-loops count as independent cycles?
4. How can the hierarchical view of PH in Figure 1 be obtained from sublevel filtrations as defined in Section 3? In other words, how can we achieve a decreasing sequence of subgraphs using sublevel filtrations?
5. Where does the proposed method's expressivity stand compared to other graph pooling layers (see [2])? The Appendix provides a brief discussion. I think that discussion could be added to the main paper.
6. Consider the same setting as Theorem 4. Isn't PH based on $D_1$ **strictly** more expressive than 1-WL? To prove this additional gain, it suffices to show a pair of graphs $G, G'$ with $D_1(G) \neq D_1(G')$ that 1-WL cannot distinguish. Wouldn't G=two triangles and G'=hexagon (where all nodes have the same features) be an example of such a pair?
7. How does the proposed model perform using a fixed (non-trainable) random filtration function (e.g., an MLP where the parameters are fixed)?
[1] - Topological GNNs, ICLR 2022.
[2] - The expressive power of pooling in Graph Neural Networks, NeurIPS 2023.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The paper mentions as limitation the "heavy reliance of the proposed method on circular structures within graphs" in Appendix F.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1:Concerns about theory and filtrations.
>The theoretical analysis of the expressive power and other properties of graph pooling is crucial in this field, as it aids in selecting between existing pooling operators or developing new ones [1]. Beyond TOGL [2], which proves the expressivity of PH using **0-dim** features, our theoretical novelty lies in the further proof that PH with **1-dim** features is also as expressive as WL. Additionally, in Appendix E.6, empirical results demonstrate that our proposed method can be more effective in distinguishing non-isomorphic graphs, indicating that our model can learn to approximate these filtrations in practice.
>
>Furthermore, we apply the filtration function f to the hidden representations $\mathbf{X}^{(l)}$ obtained through GNNs (see Eq. (4) and (6)), where graph structures are considered. This type of operation is also found in relevant literature [2, 5].
W2: Motivation for preserving cycles
>In graph pooling, the notion of a good/useful latent graph topology is less obvious. Many efforts have been devoted to learning representations with additional regularizers, among which PH is frequently used for data with topological structures. Since preliminary experiments (Table S3) showed that using 0-dim topological features merely increases runtime, our proposed method focuses only on 1-dim persistence diagrams, which we interpret as preserving cycles in graph coarsening. This claim is also supported by Reviewer HRv2 with literature evidence [3, 4].
W3: No significant gains over some pooling-free models. Experiments on ZINC.
>Our primary objective is to devise a novel mechanism to **boost** graph pooling methods rather than proposing a new method to surpass existing GNN models. Our performance depends on how the original pooling methods perform. Experimental results demonstrate that the proposed method achieves substantial performance improvement when applied to several pooling methods. Additionally, our method outperforms strong pooling-free baselines such as GSN in all but one case.
>
>Furthermore, we conducted an additional experiment on the ZINC dataset. Following the settings in [7], we used the graph-level prediction task; the MAE results are shown in Table S1. Our models consistently outperform three dense pooling methods, demonstrating their ability to combine the benefits of both graph pooling and persistent homology.
Q1: For the non-zero persistence tuples, are death times always equal
>It is a well-established practice to extend persistence diagrams to mitigate numerical issues, as demonstrated in previous studies [2, 6]. The persistence tuples with non-zero values have identical death times but distinct birth times determined by their filtration values, resulting in varying persistences. Consequently, TIP leverages persistent homology to enhance graph pooling methods.
Q2: Fig. 2 does not show self-loops
>We have revised Fig. 2. The new version is provided in PDF.
Q3: Definition of simplicial complexes.
>A brief definition of simplicial complexes has already been discussed in Sec 3, lines 107-112. Graphs with self-loops can be considered low-dimensional simplicial complexes containing only 0-simplices (vertices) and 1-simplices (edges). Self-loops are counted as cycles.
Q4: The hierarchical view of PH in Figure 1
>In persistent homology, given a filtration function $f$, we can obtain a finite set of values $a_n > \cdots > a_1$ and generate a sequence of nested subgraphs $\mathcal{G} = \mathcal{G}_n \supseteq \cdots \supseteq \mathcal{G}_k \supseteq \cdots \supseteq \mathcal{G}_0 \supseteq \emptyset$, where $\mathcal{G}_k = (V_k, E_k)$ is the subgraph of $\mathcal{G}$ with $V_k := \\{ v \in V \mid f(\mathbf{x}_v) \leq a_k \\}$ and $E_k := \\{ (v, w) \in E \mid \max(f(\mathbf{x}_v), f(\mathbf{x}_w)) \leq a_k \\}$. This process is interpreted as the hierarchical view of PH in Fig. 1. As the threshold value $a_k$ decreases, a decreasing sequence of subgraphs is obtained.
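As a toy illustration (our own sketch, not code from either paper), the sublevel construction described above can be computed directly from node filtration values:

```python
def sublevel_filtration(nodes, edges, f):
    """Nested subgraph sequence induced by node filtration values:
    V_k = {v : f(v) <= a_k}, E_k = {(u, v) : max(f(u), f(v)) <= a_k}."""
    thresholds = sorted({f[v] for v in nodes})  # a_1 < ... < a_n
    sequence = []
    for a in thresholds:
        V_k = [v for v in nodes if f[v] <= a]
        E_k = [(u, v) for (u, v) in edges if max(f[u], f[v]) <= a]
        sequence.append((V_k, E_k))
    return sequence

# Toy triangle whose last vertex (and its incident edges) appears only at the top level.
f = {0: 0.1, 1: 0.2, 2: 0.9}
seq = sublevel_filtration([0, 1, 2], [(0, 1), (1, 2), (0, 2)], f)
assert [len(V) for V, _ in seq] == [1, 2, 3]
assert [len(E) for _, E in seq] == [0, 1, 3]
```

Reading the sequence from the largest threshold down gives the decreasing hierarchy of subgraphs described in the response.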
Q5: Expressivity issues
>A theoretical discussion is provided in Sec 4.3, lines 209-213, asserting that the proposed method is more expressive than dense pooling methods, which are discussed in this paper. Additional empirical evaluation of expressive power is presented in Appendix E.6. In our revised manuscript, the empirical conclusions will be integrated into the main paper in Section 4.3.
Q6: Isn't PH based on 𝐷1 strictly more expressive than 1-WL?
>Our final conclusion aligns with your statement. The proof consists of two stages: First, we prove that the expressive power of PH is at least as strong as WL, which is Theorem 4.1; Second, we provide specific examples, as you mentioned, where PH exhibits greater expressive power than WL. Thus, the final conclusion is that PH's expressive power surpasses that of WL. In the discussion following Theorem 4.1 and in Appendix C, we present statements similar to yours. We apologize for any ambiguity in our previous statements. In our revised manuscript, we will integrate the two stages into Theorem 4.1.
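As a hedged aside (our own toy check, not material from the paper), the reviewer's candidate pair can be verified with the Euler-type count $\beta_1 = |E| - |V| + C$ (number of independent cycles): both graphs are 2-regular, so 1-WL color refinement assigns every node the same color, yet their $\beta_1$ values, and hence their 1-dim diagrams, differ:

```python
def beta_1(num_nodes, edges):
    """First Betti number: |E| - |V| + number of connected components."""
    parent = list(range(num_nodes))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)
    components = len({find(v) for v in range(num_nodes)})
    return len(edges) - num_nodes + components

two_triangles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
hexagon = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
assert beta_1(6, two_triangles) == 2  # two independent cycles
assert beta_1(6, hexagon) == 1        # a single cycle
```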
Q7: What if using a fixed (non-trainable) random filtration function?
>Thank you for the valuable comment, which is instrumental in demonstrating the effectiveness of using learnable filtrations in our method. Following your suggestion, we employ an MLP with randomly initialized and fixed parameters as the filtration function, named **TIP-F**. Additional experiments were conducted on four benchmark datasets, as shown in Table S2. Overall, learnable filtrations outperform TIP-F, although in some cases TIP-F achieves similar performance.
>[1] The expressive power of pooling in graph neural networks. NeurIPS 2023.
[2] Topological graph neural networks. ICLR 2022.
[3] Topological autoencoders. ICML 2020.
[4] Connectivity-optimized representation learning via persistent homology. ICML 2019.
[5] Graph filtration learning. ICML 2020.
[6] Deep learning with topological signatures. NIPS 2017.
[7] Rethinking pooling in graph neural networks. NeurIPS 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their efforts to address my concerns, especially for the additional experiments.
However, most of my concerns remain. More details here:
> Beyond TOGL [2] that proves the expressivity of PH using 0-dim features, our theoretical novelty lies in the further proof that PH with 1-dim features is also as expressive as WL.
PH with 1-dim features is as expressive as 1-WL because the incorporation of self-loops corresponds to adding the birth time of the 0-dim diagrams to the 1-dim diagrams --- information from the birth times (1-WL colors) suffices to be as expressive as 1-WL. Also, note that the setting in TOGL differs from the self-looped case in this paper. Therefore, I find it misleading to say that the theoretical results of this paper go beyond what is in [2]. I think the theoretical results are rather trivial.
> we apply the filtration function f to the hidden representations $\mathbf{X}^{(l)}$ obtained through GNNs (see Eq. (4) and (6)), where graph structures are considered.
The domain of the filtration function is not consistently used throughout the paper. In line 114, f is a function on nodes + edges. In line 120, f is a function on the initial features of the vertices. In the proofs, f leverages the 1-WL colors. Thus, to be exact, the filtration f should take the graph structure as input, not only the initial features (as in line 120). This is what I meant by my comment.
> The persistence tuples with non-zero values have identical death times but distinct birth times determined by their filtration values, resulting in varying persistences. Consequently, TIP leverages persistent homology to enhance graph pooling methods.
I disagree. If all non-zero persistent tuples have death times equal to infinity, then they are all essential features that are captured by homology (Betti 1) + the values of the filtering functions. To capture the persistence of independent cycles, people have applied Extended PH [1].
[1] PersLay: A Neural Network Layer for Persistence Diagrams and New Graph Topological Signatures, AISTATS 2020.
> A brief definition of simplicial complexes has already been discussed in Sec 3, lines 107-112
I don't think lines 107-112 provide a proper definition.
> we can get a finite set of values $a_n > \cdots > a_1$ and generate a sequence of nested subgraphs of the form $\mathcal{G} = \mathcal{G}_n \supseteq \ldots \mathcal{G}_k \ldots \supseteq \mathcal{G}_0 \supseteq \emptyset$, where $\mathcal{G}_k = (V_k, E_k)$ is a subgraph of $\mathcal{G}$ with $V_k:=$ { $v \in V \mid f(\mathbf{x}_v) \leq a_k$} and $E_k:=${$(v, w) \in E \mid \max (f(x_v), f(x_w)) \leq a_k$}. As the value decreases, a decreasing sequence of subgraphs is obtained.
"As the value decreases" --- what does this mean when we have that $a_1 < a_2 < ... < a_k ... < a_n$? Isn't $k$ the index for the sequence? Also, in section 3, the paper considers the pre-image of [-\infty, a]. I still believe there is a clear mismatch between the motivation in Figure 1 and the idea of sub-level filtrations used in the paper.
Finally, the performance gains over random filtering functions are mostly marginal.
Therefore, I would like to keep my initial rating.
---
Reply to Comment 1.1.1:
Title: Thanks to Reviewer fBuZ and some extra clarification
Comment: Thank you for your insightful feedback. We are happy to address your concerns and questions. Detailed responses to your comments are provided below.
> Q1
We apologize for the unclear exposition. We acknowledge that it is inappropriate to state that our theoretical result goes beyond [1]. Rather, we claim that it is an extension of the theoretical results in TOGL with additional **practical** implications. The novelty of our theoretical result lies in the fact that, by augmenting the 1-dimensional persistence diagrams with self-loops, **the necessity of explicitly computing 0-dimensional persistence diagrams is eliminated, thus reducing computational burdens**. This strongly supports our algorithm design, and the novelty is also confirmed by Reviewer 3dRY. We will clarify this point in the revised version according to your suggestion.
> Q2
We apologize for the unclear descriptions; we acknowledge a slight abuse of notation. In line 114, we intend to convey that, in general, filtration functions can be either node-based or edge-based. Since the adopted filtrations are node-based, the rest of the paper mainly uses node-based filtration functions.
In line 120 and the rest of the paper, $f(x_v)$ does not imply that filtration functions are applied to the initial node features; rather, it expresses that the filtration is node-based. In our previous response, we explained that the filtration function is applied to the hidden representations obtained through GNNs. To avoid ambiguity, in our revised manuscript we will introduce a variable $h_v$ for the hidden representation and replace $f(x_v)$ with $f(h_v)$ to distinguish it from the initial node feature $x_v$.
> Q3
We employed ordinary PH mainly because it has been proven effective in capturing cycle-related topological information [1, 2], and the main focus of this paper is to design a framework that boosts graph pooling with PH rather than to extend existing filtration learning methods. As claimed in [1], it is also possible to use extended persistence, but extended PH may incur additional computational burden in our setting. Considering these factors, we did not adopt extended PH in our paper. Indeed, as the reviewer points out, incorporating extended PH may bring additional benefits, so we will investigate this in future work.
> Q4
We apologize for our previously unclear exposition. The definition of a simplicial complex is essential in the context of TDA, and we provide a formal definition here, which will be added to our revised paper:
**Definition 1 (Simplicial Complex)**: A simplicial complex $K$ consists of a set of simplices of certain dimensions. Each simplex $\sigma \in K$ has a set of faces, and each face $\tau \subseteq \sigma$ must satisfy $\tau \in K$. An element $\sigma \in K$ with $|\sigma| = k+1$ is called a $k$-simplex, which we denote by writing $\mathrm{dim}\,\sigma = k$. Furthermore, if $k$ is maximal among all simplices in $K$, then $K$ is referred to as a $k$-dimensional simplicial complex. A graph can be seen as a low-dimensional simplicial complex that contains only 0-simplices (vertices) and 1-simplices (edges).
> Q5
We apologize that our previous clarification caused a misunderstanding. The ordered filtration values $a_1 < \cdots < a_n$ in Section 3 are not directly related to Figure 1(a). In Figure 1(a), we are actually considering **persistence** (the lifespan of each tuple, see Figure 1(c)) in PH rather than the PH process itself. Since high persistence corresponds to features and low persistence is typically considered noise, gradually filtering out edges with low persistence yields a hierarchical view of subgraphs. This process shares a similar hierarchical fashion with graph pooling, where filtering out low persistence can be viewed as dropping unimportant edges, as illustrated in Figure 1(a). In our methodology, the _Persistence Injection_ module implements this low-persistence filtering operation, in line with our motivation.
In our revised manuscript, we will modify Figure 1(a) by replacing PH with **persistence filtering** to prevent ambiguity. Moreover, we will provide illustrative examples for a better understanding of the persistence filtering process.
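A minimal sketch of this persistence-filtering view (illustrative only; the edge persistence values below are invented for the example): raising the persistence threshold progressively drops low-persistence edges, yielding a hierarchy of coarser subgraphs analogous to dropping unimportant edges in pooling:

```python
# Edges annotated with a (made-up) persistence score.
edges = {("a", "b"): 0.9, ("b", "c"): 0.1, ("c", "a"): 0.8, ("c", "d"): 0.05}

def filter_low_persistence(edges, tau):
    """Keep only edges whose persistence is at least tau."""
    return {e: p for e, p in edges.items() if p >= tau}

# Increasing thresholds give a nested, increasingly coarse hierarchy.
hierarchy = [filter_low_persistence(edges, tau) for tau in (0.0, 0.2, 0.85)]
assert [len(h) for h in hierarchy] == [4, 2, 1]
```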
> The performance gains over random filtering functions are mostly marginal.
Using learnable filtrations leads to significant gains over random filtration functions in more than half of the cases. In some cases, randomly initialized filtrations may happen to be close to the learned ones, but this does not occur consistently. Similar findings have been made in graph pooling, where **randomly pooled subgraphs** also lead to good performance compared to **learnable subgraphs** [3]. We thank Reviewer fBuZ for suggesting the use of random filtrations, which prompted this new finding, as this area remains unexplored. It is beyond the scope of this paper, and we plan to explore it in future work.
---
Reply to Comment 1.1.2:
Title: Thanks to Reviewer fBuZ and some extra clarification
Comment: Thank you for your feedback and for bringing these concerns to our attention. We apologize for any confusion our previous clarification may have caused. We have carefully reviewed the issues you've highlighted and will make the necessary revisions. Your insights are important to us, and we will work diligently to address them in our revised submission.
[1] Topological graph neural networks. ICLR 2022.
[2] Graph filtration learning. ICML 2020.
[3] Rethinking pooling in graph neural networks. NIPS 2020. | null | null | null | null | null | null |
Spiking Transformer with Experts Mixture | Accept (poster) | Summary: The manuscript presents an innovative approach to integrating SNNs and MoE methodologies into a cohesive framework. It introduces the Spiking Experts Mixture Mechanism (SEMM), which leverages the sparse spiking activations of SNNs and the conditional computation of MoE. The proposed SEMM has been adapted into the Spiking Transformer architecture, resulting in two novel structures: the Experts Mixture Spiking Attention (EMSA) and the Experts Mixture Spiking Perceptron (EMSP). Experimental results are provided, showing SEMM's effectiveness in achieving stable improvements on neuromorphic and static datasets with manageable increases in computational overhead.
Strengths: 1. Novel mechanisms. The introduction of SEMM, along with its derivatives EMSA and EMSP, is a novel contribution, particularly in how these mechanisms manage to perform conditional computation dynamically.
2. The improvement of performance brought by SEMM is significant. The EMSA and EMSP modules can be adapted to any spiking self-attention and MLP modules, which may be beneficial to broader SNN research community.
3. Experiments are comprehensive. Especially, Fig. 6 shows the average spiking rate of spatial-temporal locations of routers in different kinds of images, which proves the effectiveness of spiking routing functions.
Weaknesses: 1. Several Unclear Notations.
(1) In Fig.1 (b) and Line 132, the authors use the notation of $N$, but what does the $N$ denote? I cannot find explanations on this issue. Does $N$ denote the length of image patches?
(2) In Fig.2 (c), why does "DWL" stand for Depth-Wise Conv? I think it would be better to denote it as "DWC" or just explain it as Depth-Wise Linear.
(3) In Equation 24 and 26, what’s the meaning of $p$ and $q$?
Please clarify these notations.
2. The Section 4.3 is more like a sensitivity experiment of hyper-parameters rather than an ablation study. The real ablation study has been shown in Fig. 4. Please correct this issue to avoid misleading readers.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you discuss the feasibility of adapting your proposed EMSA and EMSP modules to neuromorphic hardware? Are there any specific types of neuromorphic hardware or setups that are particularly well-suited or ill-suited for deploying SEMM?
2. How does SEMM handle the trade-off between sparsity and accuracy?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: While the authors indicate that the limitations are addressed in Section 5 (refer to Line 533 in the Checklist), it appears that this section does not currently include a discussion on the limitations.
Can you include a discussion on the limitations of the work?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions for improvement.
> ***Weakness 1**: Several Unclear Notations.*
**AW1**: Sorry for the confusion caused by unclear notations. To clarify: __(1)__ $N$ denotes the length of image patches. __(2)__ Thanks for your advice; we will change the abbreviation for the depth-wise convolutional layer from "DWL" to "DWC". __(3)__ The input current matrix (feature) $I[t]$ has dimension $P \times Q$, where $p$ and $q$ index the $p$-th row and $q$-th column, respectively. We will clarify these notations in the manuscript.
> ***Weakness 2**: The Section 4.3 is more like a sensitivity experiment of hyper-parameters rather than an ablation study.*
**AW2**: We appreciate your suggestion and will revise the titles of the ablation study and the hyperparameter sensitivity experiment in the manuscript accordingly.
> ***Question 1**: Can you discuss the feasibility of adapting your proposed EMSA and EMSP modules to neuromorphic hardware?*
**AQ1**:
SEMM can theoretically be deployed on chips that are synchronous within layers and asynchronous between layers, such as TrueNorth [R1]. The spike-driven operation of SEMM is similar to the AND and ADD residuals in SEW ResNet [R2]. Multiple branches and residuals are supported on certain neuromorphic hardware, such as Speck V2 [R3]. However, deploying SEMM on a fully asynchronous neuromorphic chip is challenging.
> ***Question 2**: How does SEMM handle the trade-off between sparsity and accuracy?*
**AQ2**:
In SEMM, we do not control sparsity using hyperparameters; instead, the model dynamically adjusts based on the input. Therefore, there is no trade-off between sparsity and performance. Higher sparsity does not necessarily imply lower accuracy. SEMM utilizes dynamic spiking sparsity to select feature representations within each expert, thereby enhancing accuracy.
> ***Limitation 1**: While the authors indicate that the limitations are addressed in Section 5 (refer to Line 533 in the Checklist), it appears that this section does not currently include a discussion on the limitations. Can you include a discussion on the limitations of the work?*
**AL1**:
We apologize for the confusion caused by our incomplete statement. In Sec. 5, we state that future work will focus on applying SEMM to larger SNN models. In line with your suggestion, we further discuss the limitations of this work:
SEMM has been validated on image datasets, demonstrating its effectiveness, but it has not yet been verified on other tasks and more diverse datasets. Although we validated SEMM's effectiveness on time-series forecasting tasks as suggested by Reviewer RYQd, demonstrating its generalizability, many tasks remain to be validated. Additionally, the effectiveness of SEMM on larger SNN models has yet to be verified. Future work will address these two limitations.
We will add this discussion to the appendix of the manuscript.
[R1] Akopyan, Filipp, et al. "Truenorth: Design and tool flow of a 65 mw 1 million neuron programmable neurosynaptic chip." IEEE transactions on computer-aided design of integrated circuits and systems 34.10 (2015): 1537-1557.
[R2] Fang, Wei, et al. "Deep residual learning in spiking neural networks." Advances in Neural Information Processing Systems 34 (2021): 21056-21069.
[R3] Ole, Richter, et al. "Speck: A Smart event-based Vision Sensor with a low latency 327K Neuron Convolutional Neuronal Network Processing Pipeline." IEEE International Symposium On Asynchronous Circuits and Systems (2023).
---
Rebuttal Comment 1.1:
Title: Good Rebuttal
Comment: I have thoroughly reviewed the authors' response, as well as their replies to the other reviewers. The authors have addressed my concerns, and I was pleasantly surprised to find that in their response to Reviewer RYQd, they mentioned applying SEMM to Spikformer, which notably improves time-series forecasting performance. This finding left a strong impression on me. In light of this, I have decided to raise my score.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you again for your valuable time and feedback.
Best regards
Authors | Summary: The authors introduce MOE into SNN, propose the SEMM structure, and use this structure to enhance the attention and MLP modules in the Spikeformer-like architecture. They transform these elements into the EMSA and EMSP modules. The new network architecture achieved better performance under similar parameter settings compared to the original spiking transformer structures.
Strengths: The SEMM proposed by the authors does not introduce floating-point matrix multiplication during the SNN forward process. Its SMSA structure effectively focuses on the relevant parts of the image. As a result, SEMM enhances the performance of various spiking transformers, achieving SOTA performance on different datasets.
Weaknesses: 1. The authors claim that SEMM's advantage lies in its spike-driven characteristics and suitability for asynchronous chips. However, the authors did not run the SEMM structure on real asynchronous hardware. In my opinion, because an asynchronous chip is spike-driven and lacks a hardware clock, it is difficult for the multiple spike matrices in the router part (in SEMM) to arrive simultaneously and interact, so most outputs end up being 0. The authors need to provide a more detailed explanation of the spike-driven advantage, especially on asynchronous hardware.
2. Prior to Eqn. 14, the authors omitted the equation for partitioning the input X (how to get the $A_m$?).
3. In Eqn. 17, $BN(SEMM(E, R, F(·)))W_o$ undoubtedly involves floating-point matrix operations; is there a writing error present here?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The SEMM(E, R, F(·)) obtained from Eqn. 16 is described as containing only 0 or 1. But it should be a floating-point matrix.
2. Line 217: What does "The ASR of EMSA is around 0.5, which is comparable to the regular TopK setting of ANN-MoE" mean? What is the relationship between the ASR in EMSA and the TopK rate in ANN-MoE?
3. What is the purpose of Fig. 6? In my view, different inputs leading to different activations are natural.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed comments. We would like to address your concerns below.
> ***Weakness 1**: The author needs to provide a more detailed explanation of the spike-driven advantage, especially on asynchronous hardware.*
**AW1**: As you mentioned, deploying SEMM on a fully asynchronous neuromorphic chip is challenging. However, SEMM can theoretically be deployed on chips that are synchronous within layers and asynchronous between layers, such as TrueNorth [R1]. Specifically, the multiple spiking router matrices, i.e. $\{ r_1,r_2,...,r_m\}$, can be implemented by one block (Linear-BatchNorm-SpikingNeuron) and thus can be mapped to a single core of the neuromorphic chip for computation. Outputs from the same core are considered synchronous. The spiking Hadamard product and addition that occur in SEMM are analogous to the AND and ADD operations in SEW ResNet [R2]. These element-wise spiking operations can be supported on neuromorphic chips that support multiple branches and residuals, such as Speck V2 [R3].
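As a small numerical sanity check of the AND/ADD analogy above (pure NumPy on binary spike vectors; the toy vectors are illustrative, not from the rebuttal):

```python
import numpy as np

# Two binary spike trains at one timestep (illustrative toy data).
a = np.array([1, 0, 1, 1], dtype=np.uint8)
b = np.array([1, 1, 0, 1], dtype=np.uint8)

# Spiking Hadamard product on binary spikes is exactly element-wise AND.
assert np.array_equal(a * b, a & b)

# Element-wise addition yields integer spike counts, as in SEW ResNet's ADD.
assert np.array_equal(a + b, np.array([2, 1, 1, 2]))
```

Both operations therefore stay multiplication-free on binary inputs, which is what makes them candidates for event-driven hardware.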
> ***Weakness 2**: Prior to Eqn. 14, the authors omitted the equation for partitioning the input X (how to get the $A_m$?).*
**AW2**: Sorry for the confusion caused by this omission. In Eqn. 14, each $\bf A_m$ is obtained as follows,
$$
{\bf A_m} = \text{SSA}({\bf Q_m}, {\bf K}, {\bf V}),
$$
where $\bf Q_m$ is individual to each expert, and $\bf K, V$ are shared among experts. We will revise it in the manuscript.
> ***Weakness 3, Question 1**: In Eqn. 17, $BN(SEMM(E, R, F(·)))W_o$ undoubtedly involves floating-point matrix operations; is there a writing error present here? The SEMM(E, R, F(·)) obtained from Eqn. 16 is described as containing only 0 or 1. But it should be a floating-point matrix.*
**AW3**: We apologize for our incorrect expression. The $\text{F}(·)$ in Eqn. 16 involves spiking Hadamard products and element-wise addition. Since both $\bf E$ and $\mathcal{R}$ are in spiking form, the output of ${\rm{SEMM}}({\bf E}, \mathcal{R}, {\text F}(\cdot))$ is an integer spiking form rather than a binary spiking form. $\text{BN}({\rm{SEMM}}({\bf E}, \mathcal{R}, {\text F}(\cdot)))\bf W_o$ in Eqn. 17 can be decomposed into addition operations and multiplication is avoided, which is shown as
$$ {\rm{SEMM}}({\bf E}, \mathcal{R}, {\text F}(\cdot)) = \sum_{i=1}^{m} {\bf r}_i * {\bf A}_i = {\bf s}_1 + {\bf s}_2 + \cdots + {\bf s}_m $$
$$ {\rm{SEMM}}({\bf E}, \mathcal{R}, {\text F}(\cdot)){\bf W}_o = ({\bf s}_1 + {\bf s}_2 + \cdots + {\bf s}_m){\bf W}_o = {\bf s}_1{\bf W}_o + {\bf s}_2{\bf W}_o + \cdots + {\bf s}_m{\bf W}_o $$
where each ${\bf s}_i$ is in binary spiking form. We omit $\text{BN}$ because its parameters can be integrated into the linear layer. We will revise this in the manuscript.
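The decomposition above is plain distributivity plus the fact that a binary spike matrix only selects and accumulates rows of the weight matrix; a quick NumPy check (all sizes and the NumPy stand-in are illustrative assumptions, not the hardware kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d_in, d_out = 4, 8, 16, 32   # experts, tokens, dims (illustrative)

# One binary spike matrix s_i per expert (the r_i * A_i products above),
# plus a shared output projection W_o.
s = rng.integers(0, 2, size=(m, n, d_in)).astype(float)
W_o = rng.standard_normal((d_in, d_out))

# Left side: sum the spikes into an integer matrix, then project.
direct = s.sum(axis=0) @ W_o
# Right side: project each binary s_i and add -- distributivity.
decomposed = sum(s_i @ W_o for s_i in s)
assert np.allclose(direct, decomposed)

# Each binary product s_i @ W_o needs no multiplications: for every token
# it just accumulates the rows of W_o where that token spiked.
token_spikes = s[0, 0].astype(bool)
assert np.allclose(s[0, 0] @ W_o, W_o[token_spikes].sum(axis=0))
```

So the projection of the integer spike sum reduces entirely to additions over rows of ${\bf W}_o$, consistent with the multiplication-free claim.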
>**Question 2**: Line 217: What does "The ASR of EMSA is around 0.5, which is comparable to the regular TopK setting of ANN-MoE" mean? What is the relationship between the ASR in EMSA and the TopK rate in ANN-MoE?
**AQ2**: In ANN-MoE [R4, R5], TopK typically selects 2 experts, which can be considered as 50% sparsity for a total of four experts in our settings. This is similar to the average spiking rate (sparsity) observed in EMSA.
>**Question 3**: What is the purpose of Fig. 6? In my view, different inputs leading to different activations are natural.
**AQ3**: The purpose of Fig. 6 is to further demonstrate the dynamic conditional computation characteristics of routers in SEMM. It illustrates the average spiking rate across spatial-temporal locations of routers for different classes of images (with 50 images per class), offering a more representative visualization than the single image in Fig. 5. Specifically, the spiking pattern of a router varies with time steps and spatial positions. Across different categories, the active routers differ; for example, in (a), routers 1 and 2 (top left and top right) are active, while in (b), routers 1 and 4 (top left and bottom right) exhibit high activation.
[R1] Akopyan, Filipp, et al. "Truenorth: Design and tool flow of a 65 mw 1 million neuron programmable neurosynaptic chip." IEEE transactions on computer-aided design of integrated circuits and systems 34.10 (2015): 1537-1557.
[R2] Fang, Wei, et al. "Deep residual learning in spiking neural networks." Advances in Neural Information Processing Systems 34 (2021): 21056-21069.
[R3] Richter, Ole, et al. "Speck: A Smart event-based Vision Sensor with a low latency 327K Neuron Convolutional Neuronal Network Processing Pipeline." IEEE International Symposium On Asynchronous Circuits and Systems (2023).
[R4] Noam Shazeer, et al. "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer." International Conference on Learning Representations (2017).
[R5] Yanqi Zhou, et al. "Mixture-of-experts with expert choice routing." Advances in Neural Information Processing Systems (2022).
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. All my concerns have been addressed.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you for your recognition of this work.
Best regards
Authors | Summary: The paper presents a novel integration of Spiking Neural Networks (SNNs) with Mixture-of-Experts (MoE) to form the Spiking Transformer, introducing the Spiking Experts Mixture Mechanism (SEMM). The SEMM enables dynamic sparse-conditional computation by having both experts and routers output spiking sequences, which aligns with the sparse and energy-efficient nature of SNNs. The proposed model incorporates two key components: Experts Mixture Spiking Attention (EMSA) for head-wise routing and Experts Mixture Spiking Perceptron (EMSP) for channel-wise allocation of spiking experts. Empirical evaluations demonstrate that SEMM improves performance on neuromorphic and static datasets with minimal additional computational overhead compared to traditional SNN-based transformers.
Strengths: 1. Originality: The fusion of SNNs with MoE principles is innovative and provides a fresh perspective on conditional computation.
2. Quality: The paper offers a rigorous mathematical and conceptual framework for SEMM, demonstrating a thorough understanding of both SNNs and MoE.
3. Clarity: While the concepts are complex, the authors have made efforts to explain them coherently, making the paper accessible to a wide audience.
4. Significance: The work has the potential to impact energy-efficient AI, particularly in edge devices where power consumption is a primary concern.
Weaknesses: 1. The paper could benefit from more comprehensive empirical evaluations across diverse datasets to generalize the findings.
2. The lack of theoretical guarantees or analysis regarding the convergence and stability of SEMM leaves room for skepticism about its robustness.
3. A more detailed discussion on computational efficiency and scalability with respect to dataset size and complexity would be beneficial.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can the authors provide more empirical evidence demonstrating the effectiveness of SEMM on larger and more diverse datasets?
2. How does the proposed SEMM adapt to varying levels of sparsity in the input data, and what is the impact on computational efficiency?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: 1. Generalizability: The experiments, though indicative of SEMM's effectiveness, are limited to a specific subset of datasets. Extending the evaluation to include a wider variety of data, especially data with different characteristics and scales, would strengthen claims about the model's versatility and robustness.
2. Implementation Details for Reproducibility: Although the paper outlines the conceptual framework of SEMM, more explicit details on the implementation and the source code, such as hyperparameter tuning strategies and specifics of how the spiking sequences are generated and handled, would facilitate reproducibility and foster further advancements built upon this work.
3. The novelty of the router mechanism for expert mixture is limited, as it seems to differ little from the router mechanism in ANN MoE methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We will explain and discuss your concerns.
> ***Weakness 1, Question 1, Limitation 1**: More experiments across diverse datasets to generalize the findings.*
**A1**: __The datasets in our experiments possess distinct characteristics and scales, which verify the effectiveness and generalization capability of SEMM.__ Specifically, the neuromorphic dataset exhibits temporal characteristics and sparse event features, validating SEMM's capability to handle sparse event data. ImageNet-1K, with its 1,000 classes, verifies SEMM's ability to process complex large-scale data.
__In terms of data scale,__ we apologize for not being able to provide results on larger datasets due to the rebuttal time and resource constraints.
__In terms of data diversity,__ we add the validity and generalization experiments of SEMM on time series forecasting, which is a challenging task and aims to predict future values based on historical observations arranged chronologically. Addressing this task often involves modeling the temporal dynamics, resonating profoundly with the nature of neural coding and SNN. As shown in Tab. R2-1, SEMM demonstrates superior predictive capabilities over the baseline on four datasets, further illustrating its versatility and robustness.
### Table R2-1
| Model | Horizon | Metr-la [R1] | Pems-bay [R1] | Solar [R2] | Electricity [R2] |
|---------|:---------:|---------|---------|---------|---------|
| | | $\text R^2$ / RSE | $\text R^2$ / RSE | $\text R^2$ / RSE | $\text R^2$ / RSE |
| Spikformer |6 | 0.713/ 0.565 | 0.773/ 0.514 | 0.929/ 0.272| 0.959/ 0.373 |
| Spikformer+SEMM | 6 | 0.721/ 0.557 | 0.776/ 0.510 | 0.930/ 0.270| 0.964/ 0.338 |
| Spikformer |24 | 0.527/ 0.725 | 0.697/ 0.594 | 0.828/ 0.426| 0.955/ 0.371 |
| Spikformer+SEMM | 24 | 0.534/ 0.719 | 0.707/ 0.584 | 0.835/ 0.423 | 0.968/ 0.363 |
| Spikformer |48| 0.399/ 0.818 | 0.686/ 0.606 | 0.744/ 0.519| 0.955/ 0.379 |
| Spikformer+SEMM | 48| 0.400/ 0.811 | 0.688/ 0.603 | 0.751/ 0.464 |0.968/ 0.320 |
| Spikformer |96 | 0.267/ 0.903 | 0.667/ 0.621 | 0.674/ 0.586| 0.954/ 0.382 |
| Spikformer+SEMM | 96 | 0.279/ 0.895 | 0.673/ 0.618 | 0.675/ 0.581 | 0.961/ 0.381 |
Here, the horizon represents the prediction length. RSE stands for Root Relative Squared Error, where a smaller value indicates a lower error. $\text R^2$ represents the coefficient of determination, with higher values indicating higher prediction accuracy.
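For reference, the two metrics in Tab. R2-1 follow their standard definitions in the time-series forecasting literature; a minimal sketch (the helper names and toy data are ours, not from the rebuttal):

```python
import numpy as np

def rse(y_true, y_pred):
    # Root Relative Squared Error: residual norm relative to the
    # ground truth's deviation from its mean (lower is better).
    return np.sqrt(((y_true - y_pred) ** 2).sum()
                   / ((y_true - y_true.mean()) ** 2).sum())

def r2(y_true, y_pred):
    # Coefficient of determination (higher is better); note r2 = 1 - rse**2.
    return 1.0 - ((y_true - y_pred) ** 2).sum() \
               / ((y_true - y_true.mean()) ** 2).sum()

y = np.array([1.0, 2.0, 3.0, 4.0])
assert np.isclose(rse(y, y), 0.0)   # perfect prediction
assert np.isclose(r2(y, y), 1.0)
# Predicting the mean gives RSE = 1 (the trivial baseline).
assert np.isclose(rse(y, np.full_like(y, y.mean())), 1.0)
```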
> ***Weakness 2**: The lack of theoretical guarantees or analysis regarding the convergence and stability of SEMM leaves room for skepticism about its robustness.*
**AW2**: We regret that we are unable to provide a theoretical analysis of the convergence and robustness of SEMM. To the best of our knowledge, there are currently no works theoretically analyzing the convergence of deep SNNs. __However, to address your concern, we validate SEMM's convergence and robustness through experiments.__
__Convergence: From an experimental perspective, SEMM does not affect the final convergence results of the spiking transformer.__ In Fig. R2-1 and R2-2 of our submitted PDF rebuttal document, we compare the training loss and test accuracy curves between the Spikformer baseline and SEMM. The introduction of SEMM did not affect convergence.
__Robustness: From an experimental perspective, SEMM does not compromise the robustness of the model.__ To show this, we add experiments about input noise on CIFAR100. We add a Gaussian noise with mean 0 and variance $\sigma^2$ on inputs $X$. The results are shown in Tab. R2-2. It can be found that at each noise level, introducing SEMM is still better than the baseline.
### Table R2-2
| Model |$\sigma= 0$ (no noise) |$\sigma= 0.1$|$\sigma= 0.2$|$\sigma= 0.3$|
| -------- |:--------:|:--------:|:--------:|:--------:|
| Spikformer | 77.86 | 72.62 | 70.37 | 68.39|
| Spikformer with SEMM | 79.04 | 73.47|71.05|69.54|
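The perturbation used in Tab. R2-2 is additive zero-mean Gaussian noise on the inputs; a sketch of such an evaluation-time corruption (the function name, seed, and toy tensor are illustrative assumptions):

```python
import numpy as np

def add_input_noise(x, sigma, seed=0):
    """Perturb inputs with zero-mean Gaussian noise of std sigma.
    (sigma levels and the seed are evaluation choices, not model code.)"""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=x.shape)

x = np.ones((2, 3))
noisy = add_input_noise(x, sigma=0.1)
assert noisy.shape == x.shape        # same tensor shape, values perturbed
```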
> ***Weakness 3**: A more detailed discussion on computational efficiency and scalability with respect to dataset size and complexity would be beneficial.*
**AW3**: Thanks for your comments. Following your suggestion, we show that SEMM exhibits higher computational efficiency than the ANN-MoE paradigm on three diverse datasets, i.e., CIFAR100 (static images), CIFAR10-DVS (neuromorphic images), and Solar (time series data). As shown in Tab. R2-3 (below), the number of parameters (Param) and computational operations (OPs) of SEMM on the three datasets are both lower than those of ANN-MoE, while SEMM also achieves the best performance. This demonstrates its computational efficiency and scalability across different datasets.
### Table R2-3
| Model | MoE-Type|CIFAR100 | CIFAR10-DVS | Solar |
| -------- |:--------|:--------:|:--------:|:--------:|
| | | Param(M)/OPs (G)/ Acc (%) |Param(M)/OPs (G)/ Acc (%)|Param(M)/OPs (G)/ $\text R^2$|
| Spikformer |Baseline | 9.32/0.83/77.86 |2.57/1.06/80.90|2.52/0.88/0.828
| Spikformer |SEMM | 8.98/0.94/79.04 |2.57/0.89/82.90|2.52/0.78/0.835
| Spikformer |AM-F | 23.60/1.22/77.35 |5.74/1.45/80.80|5.69/1.16/0.821
| Spikformer |AM-S | 23.60/1.34/76.79 |5.74/1.42/82.90|5.69/1.05/0.819
> ***Question 2**: How does the proposed SEMM adapt to varying levels of sparsity in the input data, and what is the impact on computational efficiency?*
**AQ2**: __Regarding question 2, could you further explain what you mean by data sparsity?__
> ***Limitation 2**: Implementation Details for Reproducibility.*
**AL2**: You can refer to the Appendix. C for the hyperparameters and find the source code of SEMM in the supplementary materials.
[R1] Li, Yaguang, et al. "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting." arXiv preprint arXiv:1707.01926 (2017).
[R2] Lai, Guokun, et al. "Modeling long-and short-term temporal patterns with deep neural networks." The 41st international ACM SIGIR conference on research & development in information retrieval (2018).
---
Rebuttal Comment 1.1:
Title: comments
Comment: Thank you for your response and for supplementing the experiments with the time series dataset. However, there are still some concerns that remain unresolved:
1. The question regarding varying sparsity stems from the statement in your paper that "SEMM has a variable sparsity when dealing with different data" (Column 141, Page 4).
2. In Table R2-3, the parameter count and operations (OPs) do not appear to show significant advantages over the Spikformer baseline. In fact, the OPs of Spikformer SEMM are even higher than those of the Spikformer baseline. This raises doubts about the claimed advantage of "realizes sparse conditional computation" (as stated in the abstract). That is, what are the final contributions or advantages of "sparse conditional computation"? That is still unclear.
3. The efficiency of the expert mixture system based on the Spiking Transformer has not been evaluated on larger datasets, which is a limitation that cannot be ignored. Models with expert mixture architectures typically demonstrate superior performance on larger datasets, so a broader evaluation would be necessary to support a high-quality publication at this conference.
4. There are also concerns regarding the novelty of the proposed approach. Specifically, the differences between the proposed model and existing ANN MoE methods, beyond the LIF neurons, are not strong enough to support the novelty of the proposed model, as mentioned by other reviewers.
---
Rebuttal 2:
Comment: Dear Reviewer,
We hope this message finds you well.
We have provided thorough responses to you and sincerely hope you can look through them and update the scores if your concerns have been resolved. We are also open to further discussion if the concerns have not been fully addressed. Please feel free to let us know if you still have any questions.
Best regards
Authors
---
Rebuttal 3:
Title: Rebuttal by Authors
Comment: Dear Reviewer,
Thank you for your time and effort in reviewing our work. We will explain and discuss your concerns.
> ***Concern 1**: Explain the statement "SEMM has a variable sparsity when dealing with different data".*
**AW1**: Sorry for the confusion caused by the unclear statement. We intended to convey that SEMM exhibits variable sparsity when dealing with different data within the same dataset. As shown in Figs. 5 and 6 and Tab. 2, for different images, each expert's corresponding router selects a different number and position of tokens, while the overall low firing rate indicates the sparsity.
> ***Concern 2**: Show the final contributions or advantages of "sparse conditional computation".*
**AW2**: Similar to ANN-MoE mechanisms, the sparse conditional computation characteristic of SEMM is the source of both the performance improvement and the computational efficiency. Our experiments show that SEMM brings stable performance improvements to SNN Transformers.
In terms of computational efficiency, as shown in Tab. R2-3, introducing SEMM keeps both parameters and operations (OPs) close to the baseline, a substantial reduction compared with introducing ANN-MoE mechanisms.
> ***Concern 3**: Evaluation on larger datasets.*
**AW3**: We apologize for not being able to provide results on larger datasets due to the resource constraints. We will add discussions on this issue in the Limitation.
> ***Concern 4**: The differences beyond the LIF neurons are not strong enough to support the novelty of the proposed model.*
**AW4**: Unlike ANN-MoE, the spike-form routers and experts in SEMM, as well as their integration process, are all in line with the spike-driven characteristics of SNNs. At the same time, SEMM achieves a sparse conditional computation effect similar to that of ANN-MoE, thereby stably enhancing performance on SNN Transformers. In addition, the lightweight architectural design is an advantage over ANN-MoE: as shown in various experiments, the number of parameters and operations after introducing SEMM is close to the baselines, far less than with ANN-MoE.
Best regards
Authors | Summary: This research introduces the Spiking Experts Mixture Mechanism (SEMM), a paradigm that combines Spiking Neural Networks (SNNs) with Mixture-of-Experts (MoE) to enhance the capabilities of Spiking Transformers. The proposed SEMM leverages the energy-efficient, sparse spiking activation characteristic of SNNs, resulting in an SNN-MoE framework that is both computation-efficient and SNN-compatible. By integrating SEMM into existing spiking Transformers, performance improvements are achieved. This work contributes to the exploration of high-performance and high-capacity Spiking Transformers.
Strengths: 1. This study is the first attempt to utilize MoE on Spiking Transformers, leveraging the advantages of SNN and MoE for stable performance gains.
2. The experimental analysis on sparse conditional computation is thorough.
3. The writing is clear and accessible.
Weaknesses: **Major:**
1. As mentioned in the first contribution (Lines 68-69), the proposed SEMM is a universal SNN-MoE paradigm. Therefore, applying SEMM to the most state-of-the-art Spiking Transformers, such as Spike-driven Transformer V2 [R1], is crucial to establish the generalizability of SEMM. However, the provided baselines do not include it, which weakens the persuasiveness of its generalizability. Moreover, SEMM-based Spiking Transformers underperform compared to [R1]'s peak results, suggesting MoE might not be essential for boosting performance. I would consider raising my rating if you can provide consistent performance improvements using SEMM on [R1].
2. Can you provide evidence or previous studies demonstrating the effectiveness of using DWConv after MLP.fc1? If such evidence exists, it should be cited; otherwise, conducting an ablation study on this layer is recommended. Without this clarification, the motivation behind adding this layer remains unclear, weakening the persuasiveness of SEMM's design applied to MLP.
3. Is the ANN-MoE paradigm derived from prior ANN-MoE studies, or is it an original concept from your research? If it's based on earlier work, citations are necessary; otherwise, you should highlight the unique advantages of your proposed ANN-MoE paradigm compared to existing ones.
**Minor:**
1. The description for Figure 3 is not clear enough. Given the explanation provided, I cannot fully understand the meaning of each line in the diagram.
2. In Figure 4, do the four graphs in each sub-figure relate to four images from the same class?
References:
[R1] Man Yao, JiaKui Hu, Tianxiang Hu, Yifan Xu, Zhaokun Zhou, Yonghong Tian, Bo XU and Guoqi Li. Spike-driven transformer v2: Meta spiking neural network architecture inspiring the design of next-generation neuromorphic chips. In International Conference on Learning Representations (ICLR), 2024.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weakness
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed comments and suggestions for improvement. We would like to address your concerns and answer your questions below.
> ***Weakness 1**: The application of SEMM on the most state-of-the-art Spiking Transformers such as Spike-driven Transformer V2.*
**AW1**: Following your suggestion, we report the Spike-driven Transformer V2 (SDT V2) with SEMM on CIFAR100 and ImageNet in Tab. R1-1.
### Table R1-1
| Model | CIFAR100(T=4) | ImageNet (T=1)|
| -------- |:--------:|:--------:|
| SDT-V2-15M | 80.40 | 71.80 |
| SDT-V2-15M with SEMM | **81.04** | **73.14** |
After configuring SEMM, SDT V2 demonstrates a consistent performance improvement, but the degree of improvement is not as large as that of SDT V1. This could be attributed to SDT V2 being a hybrid spiking CNN-Transformer framework, where the first two stages are based on spiking convolutions, with the third and fourth stages utilizing spiking transformers. Therefore, compared to SDT V1, the proportion of EMSA and EMSP blocks in SDT V2 is lower, resulting in a smaller gain. We will add these results to the manuscript appendix.
Furthermore, we note that SEMM differs from the framework designs of Spiking Transformers (Spikformer, Spike-driven Transformer V1/V2, Spikingformer, etc.): it is a universal plug-and-play paradigm that can stably enhance performance, providing a new direction for improving Spiking Transformers.
> ***Weakness 2**: Provide evidence demonstrating the effectiveness of using DWConv after MLP.fc1.*
**AW2**: In EMSP, the role of DWConv is to independently extract features at the channel-level experts using a 3x3 receptive field, thereby enhancing representational capacity.
The ablation study about DWConv in Tab. R1-2 demonstrates its effectiveness:
### Table R1-2
| Model | Module |CIFAR100| CIFAR10-DVS|
| -------- | -------- |:--------:|:--------:|
| Spikformer |EMSP | 78.53 | 82.32 |
| Spikformer |EMSP w/o DWConv| 78.17 | 81.30 |
| Spike-driven Transformer |EMSP | 79.81 | 81.10 |
| Spike-driven Transformer |EMSP w/o DWConv| 78.95 | 80.51 |
| Spikingformer |EMSP | 79.81 | 81.95 |
| Spikingformer | EMSP w/o DWConv| 79.44 | 81.20 |
Thanks for your comments, we will include the ablation experiments in the manuscript appendix.
> ***Weakness 3**: Is the ANN-MoE paradigm derived from prior ANN-MoE studies?*
**AW3**: Thank you for your kind reminder. ANN-MoE represents a paradigm derived from previous works on MoE in ANNs [R1, R2], integrating multiple experts through routing and hard sparsification. The referenced ANN MoE works in the manuscript adopt this paradigm, for example, __manuscript line 32 "Mixture-of-Experts (MoE) [20; 21] is known for allowing each expert to learn specific tasks or features, showing better performance" and line 89 "The Mixture-of-Experts (MoE) [28; 29] combines the predictions of multiple specialized experts"__. For clarity, we will append citations to these works following the term 'ANN-MoE' in the manuscript.
> ***Weakness 4**: The description for Figure 3 is not clear enough.*
**AW4**: Fig. 3 depicts the parameter number (X-axis) and performance (Y-axis) of various MoE strategies under multiple Spiking Transformer baselines. Each line consists of four points from left to right, representing:
+ __1__. Baseline
+ __2.__ EMSP
+ __3.__ ANN-MoE paradigm with float-point router matrix (AM-F)
+ __4.__ ANN-MoE paradigm with spiking router matrix (AM-S)
For clarity, we convert Fig. 3 into the following Tab. R1-3. It can be observed that our EMSP achieves better performance with fewer parameters, whereas the application of the ANN-MoE paradigm yields suboptimal results. We will revise Fig. 3 in the manuscript to enhance its readability.
### Table R1-3
| Model | MoE-Type|CIFAR100 Param(M)/ Acc| CIFAR10-DVS Param(M) / Acc|
| -------- |:--------|:--------:|:--------:|
| Spikformer |Baseline | 9.32/77.86 |2.57/80.90
| Spikformer |EMSP | 9.39/78.53 |2.58/82.32|
| Spikformer |AM-F | 23.60/77.35 |5.74/80.80|
| Spikformer |AM-S | 23.60/76.79 |5.74/82.90|
| Spike-driven Transformer |Baseline | 10.28/78.40 |2.57/80.00|
| Spike-driven Transformer |EMSP | 10.30/79.81|2.57/81.10|
| Spike-driven Transformer |AM-F | 22.91/79.64 |5.74/81.50|
| Spike-driven Transformer |AM-S | 22.91/78.89 |5.74/81.09|
| Spikingformer |Baseline | 9.32/79.21 |2.57/81.30|
| Spikingformer |EMSP | 9.39/79.81|2.58/81.95|
| Spikingformer |AM-F | 23.59/79.06 |5.74/81.00|
| Spikingformer |AM-S | 23.59/78.81 |5.74/82.50|
> ***Weakness 5**: In Figure 4, do the four graphs in each sub-figure relate to four images from the same class?*
**AW5**: We speculate that you are referring to Fig. 5, not Fig. 4. The background image is the same for each subplot. We will add this explanation to the manuscript.
[R1] Noam Shazeer, et al. "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer." International Conference on Learning Representations (2017).
[R2] Yanqi Zhou, et al. "Mixture-of-experts with expert choice routing." Advances in Neural Information Processing Systems (2022).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
To reiterate my previous comments, my primary concern is whether the proposed method is indispensable for enhancing the performance of Spiking Neural Networks (SNNs). Since the authors confirmed that the paradigm is derived from previous works on MoE in ANNs [R1, R2], my perspective is unchanged: the significance of this paper hinges on "**verifying whether techniques developed for Artificial Neural Networks (ANNs) remain effective in the context of SNNs**". Therefore, verifying the essentiality of the proposed method is crucial to the significance of this paper.
For this reason, I had previously suggested that demonstrating an improvement over the state-of-the-art (SOTA) method [R1] would be necessary. However, I find the authors' results unconvincing for the following reasons:
1. **Model Selection:** The authors chose the smallest model variant, which achieves a performance of 74.1%, whereas the best performance reported in [R1] is 80.0% with the largest model.
2. **Experiment Setup:** For the selected model, the highest performance (74.1%) is achieved with 4 time steps. However, the authors used only 1 time step without providing a compelling rationale. This contrasts with their claim that their method is a "universal plug-and-play paradigm that can reliably improve performance". This is quite unacceptable to me, and I would decrease my rating due to the questionable results.
Given the two-week time (including the author-review discussion period), I believe there is sufficient time for the authors to present more convincing results using the appropriate model that achieves the top performance of 80.0%.
---
Rebuttal 2:
Title: Rebuttal by Authors
Comment: Dear Reviewer,
Based on your suggestion, after overcoming significant challenges in resource allocation, we provide a performance report of using SEMM on the largest Meta-SDT-V2 model:
### Table R1-4
| Model | ImageNet(T=1) | ImageNet (T=4)|
| -------- |:--------:|:--------:|
| Meta-SDT-V2-55M DT | 78.00 | 79.70 |
| Meta-SDT-V2-55M KD | 79.10 | 80.00 |
| Meta-SDT-V2-54M with SEMM DT | __79.94__ | __81.80__ |
Here, DT stands for Direct Training, while KD denotes Knowledge Distillation Assisted Training. SDT-V2-54M with SEMM DT shows an improvement of 2.10% over the direct training of the original model, and 1.80% higher than the knowledge distillation training of the original model.
SEMM can still achieve stable performance improvements on the largest Meta-SDT-V2 model while maintaining a similar number of model parameters, demonstrating its generalizability and effectiveness.
The logs, accuracy curves, specific code, and weights have been posted on an anonymous external link:
https://huggingface.co/SDTV2SEMM/Rebuttal-SDTV2-SEMM-55M
Best regards
Authors | Rebuttal 1:
Rebuttal: Dear ACs and Reviewers,
We sincerely appreciate the valuable time and feedback provided by each reviewer. We have responded to each of their comments individually.
In the rebuttal, we use
> ***Weakness/Question/Limitation**: ...*
to summarize the reviewers' comments. Responses start with '__AW, AQ, AL__'.
Best regards
Authors
Pdf: /pdf/71c37335f76d2950de3ab5fb24928b8f784050f6.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token | Accept (poster) | Summary: This paper proposes a training-based confidence calibration method, named IDK-tuning, for improving LLM factuality. Specifically, a special `[IDK]` ("I Don't Know") token is added to the model's vocabulary and an objective is introduced which shifts some probability mass of wrong predictions to the `[IDK]` token during continued pretraining, thereby encouraging the model to express uncertainty explicitly. Results of the IDK-tuned models are reported on commonly used factual benchmarks, showing the potential of this method for reducing hallucination while only suffering a slight sacrifice in terms of knowledge recall.
Strengths: * Confidence calibration and LM hallucination are both important topics and this paper connects them in an interesting way.
* The proposed IDK-tuning method is intuitive and well-motivated. While there have been papers on supervised finetuning for calibration, most of them focus on aligning the model with human demonstrations (which require annotation) or synthetic data (which incurs extra cost). The direct adaptation of the training objective to incorporate uncertainty seems quite novel to the best of my knowledge, and the results seem promising, with much higher precision and only slightly lower recall, which is well suited to the current, generally over-confident LLMs.
* The authors perform extensive experiments on the scaling behavior and ablation for different components of the objective function.
* The authors do not overclaim their contribution. Singularities of the experiment results (e.g. `NaN`s in loss and collapsed recall from the `pythia` models) are mentioned and analyzed, and interesting insights are drawn from them.
* The writing is clear and the paper is easy to follow.
Weaknesses: * For multiple-choice QA tasks, the precision improvement as well as the absolute values appear lower than those for the factual sentence completion tasks (comparing Table 1 and 2). Given this, and that the multiple-choice QA tasks might be more applicable, it would be helpful if the authors could report more detailed results on each of the `lm-eval-harness` tasks, and investigate QA or other downstream tasks more carefully.
* Code or tuned model checkpoints are not provided, although some details about the settings and resources are mentioned in the paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: * Is it possible or beneficial to add multiple special tokens to express the different levels of uncertainty that are finer-grained, instead of only a single `[IDK]` token?
* For the evaluation on `TriviaQA` and `PopQA`, why do you reformat them into sentence completion tasks? Is it because in this way the next token(s) can be directly modeled without being extracted from a complete answer?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors highlight several limitations, including the need for full pretraining on large corpus, which is costly, and that the method can slightly sacrifice recall or the overall performance on some downstream tasks such as text generation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their time reading our work, writing a thorough review and bringing up thoughtful comments.
We are encouraged that the reviewer finds our method intuitive, novel and well-motivated, that our experiments are deemed extensive, and that the paper is considered well-written and as not overclaiming the results.
Here are our responses to the weaknesses raised by the reviewer:
- We agree with the reviewer that it would be beneficial to more carefully analyze and present the lm-eval-harness results. In the camera-ready version, we will add the results of each of the datasets separately and discuss how they differ from those of the other datasets. Importantly, in response to a comment by a different reviewer, we have conducted an in-depth analysis of our model’s mistakes. The setup of the analysis is the following:
We randomly sample 200 examples (out of all the datasets) on which the IDK-tuned model generates a wrong answer (without predicting the [IDK] token). We then categorize each example into one of four categories: No Effect (both the original model and our model generate the same answer), Noise (the original model knows it, while after our training it doesn’t), White Noise (both the original model and ours don’t know it, though they generate different answers), and Abstain (when our model abstains from answering while generating text like “unknown” or “a mystery”). For this analysis we take three different models: Mistral-7B, Pythia-2.8B and Pythia-70M. The results are the following:
| model | No Effect | Noise | White Noise | Abstaining |
|-------------|-----------|-------|-------------|------------|
| Mistral-7B | 68.5 | 9 | 6.5 | 16 |
| Pythia-2.8B | 59.5 | 13.5 | 12.5 | 14.5 |
| Pythia-70M | 52 | 18.5 | 22 | 7.5 |
These results suggest that first, the bigger the model, the fewer changes our training approach causes in the model’s generations, and second, the bigger the model, the greater its ability to abstain from answering via words (which is generally equal to generating the new [IDK] token, though harder to evaluate automatically).
- We will definitely provide the code and checkpoints with the camera-ready version. Additionally, all the information needed to reproduce our model is provided in the paper.
Here are our responses to the questions raised by the reviewer:
- *“Multiple IDK tokens”*: In principle, the single [IDK] token does cover all levels of uncertainty, as we measure a continuous amount of probability mass put on the token. However, we think multiple different [IDK] tokens are interesting, e.g., discriminating between different “categories” of uncertainty such as lack of knowledge, lack of context etc.
- *“TriviaQA / PopQA reformatting”*: Yes, we perform the reformatting to enable a straightforward evaluation.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer Ny7r
Comment: Thanks for your response and additional results. I decide to keep my score unchanged. | Summary: The paper proposes calibrating LLMs during a continued pretraining phase via an added [IDK] token to model uncertainty.
Strengths: - The paper is structured well and written coherently.
- The introduction of the $\texttt{[IDK]}$ token to explicitly model uncertainty in LLMs is a novel approach
- Through ablation studies, the behavior of the proposed method is investigated extensively.
Weaknesses: - **Theory**:
- The approach is mathematically not very well grounded. Also, mathematical expressions such as $prob(\texttt{[tok]}, \hat{y})$ and $prob(argmax(\hat{y}), \hat{y})$ do not cohere with common practices [1,2,3,4]. The authors could consider something like $p(y_{t}=\texttt{[tok]} | y_{<t}, x)$ and $max_{i} \ p(y_{t}=i | y_{<t}, x)$.
- The uncertainty factor is only bigger than $0$ if any other token gets assigned a higher probability than the $\texttt{[gold]}$ token. It does not account for the case where a model is uncertain about *any* token and thus predicts a (low) probability for all tokens. For instance, consider the tokens relating to ($\text{"Paris"}$, $\text{"Berlin"}$, $\text{"London"}$, $\text{"Rome"}$, $\text{"Vienna"}$). If the model predicts any of $p(y_{t} | \text{"The capital of France is"}) \in [(0.2, 0.2, 0.2, 0.2, 0.2), (0.3, 0.1, 0.2, 0.2, 0.2), ...]$, the uncertainty factor is $0$ no matter the hyperparameter $\Pi$, while it is clear that in all those cases the model is uncertain about the correct next token. The probability of the $\texttt{[IDK]}$ token is even decreased via the uncertainty regularization.
- **Evaluation**: The authors do not compare against other uncertainty quantification methods, such as (length-normalized) predictive entropy [1], p(true) [2], or semantic entropy [3,4]. These methods do not require additional pretraining, and thus do not suffer from training instabilities, mode collapse, or high computational costs, but can directly be applied to "off-the-shelf" models. Additionally, there exist methods that fine-tune models to express their lack of knowledge which have not been considered.
---
[1] A. Malinin and M. Gales. Uncertainty estimation in autoregressive structured prediction.
[2] S. Kadavath, T. Conerly, A. Askell, T. Henighan, D. Drain, E. Perez, N. Schiefer, Z. Hatfield-Dodds, N. DasSarma, E. Tran-Johnson, S. Johnston, S. El-Showk, A. Jones, N. Elhage, T. Hume, A. Chen, Y. Bai, S. Bowman, S. Fort, D. Ganguli, D. Hernandez, J. Jacobson, J. Kernion, S. Kravec, L. Lovitt, K. Ndousse, C. Olsson, S. Ringer, D. Amodei, T. Brown, J. Clark, N. Joseph, B. Mann, S. McCandlish, C. Olah, J. Kaplan. Language Models (Mostly) Know What They Know.
[3] L. Kuhn, Y. Gal, and S. Farquhar. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation.
[4] L. Aichberger, K. Schweighofer, M. Ielanskyi, and S. Hochreiter. Semantically Diverse Language Generation for Uncertainty Estimation in Language Models.
Technical Quality: 2
Clarity: 3
Questions for Authors: - If an IDK-tuned model is to be aligned, how does the alignment hurt IDK-tuning? Also, if a model is IDK-tuned after alignment, how does the pertaining hurt alignment?
- The method is evaluated on a single model. How do you guarantee that the results generalize to bigger models?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very thankful to the reviewer for their time reading our work and writing a thorough review.
We are encouraged that the reviewer finds our paper to be well written, our approach to be novel, and our ablation experiments to be extensive and to properly demonstrate the behavior of our approach.
We will now address the reviewer's concerns:
## Theory
- For the camera-ready version, we will modify all of the mathematical expressions in the paper for better coherence with previous common practices. Thank you for the suggestions!
- *“The uncertainty factor is only bigger than $0$ if any other token gets assigned a higher probability than the $\texttt{[gold]}$ token”*: We thank the reviewer for this comment as well, which we strongly agree with. As each training run is very resource-intensive, this idea has not yet been evaluated in a proper experimental setup. We plan to test a model trained with a slight modification of our proposed objective that accounts for the model's true certainty in the gold token, even when it is the maximal-probability token. For example, we could add “1 - P(gold token)” as an additional term in the “Uncertainty Factor” definition, and remove the separation our objective applies between cases where this factor is zero and all other cases. We will add initial results for this alternative objective in the camera-ready version. We additionally think this is an exciting follow-up to our work.
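To make the contrast concrete, here is a minimal runnable sketch of the two behaviors (the gap-based surrogate below is purely illustrative and is not our paper's exact formula; `probs` is a hypothetical next-token distribution):

```python
def uncertainty_factor_original(probs, gold_idx):
    # Zero whenever the gold token is already the argmax, so flat
    # distributions where gold narrowly "wins" produce no [IDK] signal.
    p_gold = probs[gold_idx]
    p_max = max(probs)
    return 0.0 if p_gold >= p_max else p_max - p_gold  # illustrative gap surrogate

def uncertainty_factor_modified(probs, gold_idx):
    # Adds the proposed "1 - P(gold token)" term, so a correct but
    # unconfident prediction still moves probability mass toward [IDK].
    return uncertainty_factor_original(probs, gold_idx) + (1.0 - probs[gold_idx])

flat = [0.2, 0.2, 0.2, 0.2, 0.2]   # model clearly unsure, gold at index 0
assert uncertainty_factor_original(flat, 0) == 0.0   # the reviewer's concern
assert uncertainty_factor_modified(flat, 0) > 0.0    # the modified variant reacts
```

As discussed above, whether the extra term helps, or instead teaches the model to predict [IDK] on answers it knows, is exactly the design question we intend to test empirically.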
## Evaluation
Thank you for this comment. We also believe that evaluating other uncertainty quantification methods might strengthen our work. We thus decided to add the results of the methods suggested by the reviewer to our paper. We will now provide the results of the “P(true)” and the “semantic entropy” methods using the Mistral-7B model, and will add these together with the results of the other models to our camera-ready version of the paper. For the “semantic entropy” method we use SOTA named entity recognition in order to extract only the “answer” itself from the model’s generation. It is important to note though that our method could potentially be applied directly during the normal pretraining of the model, by only modifying the pretraining objective to be ours, and thus the claim that our method requires additional training would not apply. Moreover, the “Semantic Entropy” method indeed does not require any additional training, though it requires significantly more inference calls and some post-processing of the generations (clustering etc.).
The results are the following:
- **LAMA Google-RE**:
| model | Precision | Recall | F1 |
|-------------------------------------|-----------|--------|------|
| Mistral-7B | 48.1 | 48.1 | 48.1 |
| Mistral-7B + The Pile | 48.8 | 48.8 | 48.8 |
| Mistral-7B + Confidence Threshold | 60.0 | 40.0 | 48.0 |
| Mistral-7B + P(true) | 54.4 | 44.5 | 48.9 |
| Mistral-7B + Semantic Entropy | 70.1 | 38.9 | 50.0 |
| Mistral-7B + IDK Tuning on The Pile | 71.1 | 40.6 | 51.7 |
- **LAMA T-Rex**:
| model | Precision | Recall | F1 |
|-------------------------------------|-----------|--------|------|
| Mistral-7B | 71.2 | 71.2 | 71.2 |
| Mistral-7B + The Pile | 69.9 | 69.9 | 69.9 |
| Mistral-7B + Confidence Threshold | 80.4 | 63.5 | 71.0 |
| Mistral-7B + P(true) | 73.8 | 65.1 | 69.2 |
| Mistral-7B + Semantic Entropy | 88.0 | 65.4 | 75.0 |
| Mistral-7B + IDK Tuning on The Pile | 88.5 | 65.5 | 75.3 |
- **LAMA SQuAD**:
| model | Precision | Recall | F1 |
|-------------------------------------|-----------|--------|------|
| Mistral-7B + P(true) | 54.9 | 41.0 | 46.9 |
| Mistral-7B + Semantic Entropy | 70.2 | 44.5 | 54.4 |
| Mistral-7B + IDK Tuning on The Pile | 72.0 | 44.3 | 54.9 |
- **TriviaQA**:
| model | Precision | Recall | F1 |
|-------------------------------------|-----------|--------|------|
| Mistral-7B + P(true) | 58.8 | 47.5 | 52.5 |
| Mistral-7B + Semantic Entropy | 68.5 | 52.5 | 59.4 |
| Mistral-7B + IDK Tuning on The Pile | 72.5 | 52.0 | 60.6 |
- **PopQA**:
| model | Precision | Recall | F1 |
|-------------------------------------|-----------|--------|------|
| Mistral-7B + P(true) | 40.3 | 29.0 | 33.7 |
| Mistral-7B + Semantic Entropy | 68.7 | 20.4 | 31.5 |
| Mistral-7B + IDK Tuning on The Pile | 78.1 | 20.5 | 32.5 |
These results suggest that our approach still leads to the best precision and f1 scores compared to the new baselines too, though the gaps are smaller compared to the ones against the previous baselines.
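For completeness, the scoring step of the semantic-entropy baseline can be sketched as follows (cluster assignments are assumed to come from an external NLI/clustering stage over multiple sampled generations, which is precisely where its extra inference cost arises; all numbers are hypothetical):

```python
import math
from collections import Counter

def semantic_entropy(cluster_ids, sequence_logprobs):
    # p(cluster) = normalized probability mass of the sampled generations
    # whose meanings fall into that cluster; entropy is taken over clusters.
    probs = [math.exp(lp) for lp in sequence_logprobs]
    total = sum(probs)
    cluster_mass = Counter()
    for cid, p in zip(cluster_ids, probs):
        cluster_mass[cid] += p / total
    return -sum(p * math.log(p) for p in cluster_mass.values() if p > 0)

# Five sampled answers collapsing into two meaning clusters -> nonzero entropy;
# all samples agreeing semantically -> (near-)zero entropy.
disagree = semantic_entropy([0, 0, 0, 1, 1], [-1.0, -1.2, -0.9, -2.0, -2.2])
agree = semantic_entropy([0, 0, 0], [-1.0, -1.1, -0.9])
```

Note that this score only exists after sampling several full generations per prompt, whereas our [IDK] probability is read off a single forward pass.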
## Questions
- *"Combining IDK-tuning with alignment”*: While we do not explicitly study this in our work, we do believe it is an interesting direction for future work. Specifically, the line of work on Task Arithmetic [1] will be interesting: Can we extract and combine the “IDK” weight vector with an instruction-tuned model?
- *"“The method is evaluated on a single model” / “generalization to bigger models”*: We believe this question must be a misunderstanding, as we conduct very extensive experiments with eight different models, spanning encoder-only (BERT) and decoder-only architectures, as well as an explicit study of scaling behavior using the Pythia model suite and Mistral-7B.
In light of this, we kindly request you to reconsider and appropriately raise the score of your review if we have sufficiently addressed some or all of your concerns.
---
Rebuttal Comment 1.1:
Title: Theoretical and Performance Concerns
Comment: Thank you for the rebuttal.
> *Review*: **The uncertainty factor is only bigger than 0 if any other token gets assigned a higher probability than the **[gold]** token. It does not account for the case where a model is uncertain about any token and thus predicts a (low) probability for all tokens.**
> *Rebuttal*: We first thank the reviewer for this comment as well, which we strongly agree with. As each of the training runs is very resource-intensive, this idea has not been evaluated in a proper experimental setup.
The main contribution of your work is the introduction of an objective function that shifts the probability mass to the **[IDK]** token for incorrect predictions. This is not just an "idea" from my perspective; it reveals a significant theoretical flaw in your work.
Given that your method lacks a solid theoretical foundation and the empirical performance only shows marginal improvements over the current state-of-the-art uncertainty quantification methods, coupled with major drawbacks such as unknown behavior when combined with alignment, I must reject the paper at this point.
Addressing the theoretical issues will presumably enhance the performance of your method. Also, insights into how your method can be effectively applied to instruction-tuned models will improve the work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your quick response!
*Regarding the theoretical issue that has been mentioned*:
The behavior the reviewer describes as a flaw is actually what we aim for – as long as the model "knows" the answer, namely its maximal-probability token prediction is correct, we want to encourage it and further raise its confidence in it, while also decreasing its "uncertainty" (the probability it puts on the [IDK] token). Thus, we claim that the fact that the uncertainty factor is bigger than 0 only if another token gets assigned a higher probability than the gold token is not actually a theoretical flaw – enabling this would provide a signal to predict [IDK] even when the model does in fact know the answer. Therefore, the implementation the reviewer suggests is also valid, but it would create a subtly different objective serving a subtly different goal.
Additionally and very importantly – the semantic entropy method for calibration requires far more inference calls (potentially more than 10x), plus an additional external clustering step. We think this is an important point to consider when looking at the results – even though our method is only "marginally" better, it is applied once during pretraining and adds no extra computational effort at inference time.
To sum up, we will extend the introduction to discuss these points. We do believe that in practice our technique proves useful to reduce hallucination, which is a very important societal challenge in dire need of further advances.
*Regarding the alignment point mentioned by the reviewer*:
It is well-known that alignment techniques disrupt the probability estimates delivered by large language models, so this is not an issue unique to our specific paper. Our paper suggests a language modeling objective that encourages uncertainty expression via a new [IDK] token, and thus we believe it should be applied during or immediately after the initial pretraining. However, we do believe that our method could be complemented with further techniques for better post-alignment uncertainty modeling. We believe this is an exciting line of future work, and we will add this point to our discussion section. | Summary: To allow LMs to express their uncertainty for generative tasks, the authors introduce a new special IDK token. The authors modify the cross-entropy training objective to assign part of the probability mass to the IDK token in cases where the model gets the prediction wrong. The token embedding is randomly initialized and then refined through additional fine-tuning of pretrained LLMs of various sizes and types (Pythia, Mistralv1, BERT).
Through experiments on a range of completion, QA & MCQA datasets the authors show that IDK tuning positively affects precision at a slight cost of recall. The authors perform further ablation experiments solidifying their choices for the loss weight hyperparameter as well as the regularization term. The authors further show that IDK tuning does not significantly adversely affect other capabilities of the underlying LMs.
Strengths: - Well written and easy to follow
- To the best of my knowledge, presents a novel approach to quantifying uncertainty in LMs
- Exhaustive experimental evaluation & ablation over different hyperparameter choices demonstrating robustness of the proposed approach
Weaknesses: - The IDK-tuning setup requires sometimes prohibitive additional fine-tuning of the base LM
- It is unclear how the method would be applied if a model were pretrained from scratch with the IDK token included - it is natural that LMs will be worse at predicting tokens in the initial stages of training, so the pretrain, then add IDK, then tune paradigm seems to be the only current option. The two previous points slightly limit the applicability of the method.
- While the reported F scores are generally higher compared to baselines and alternatives, the IDK-tuned models still suffer from tangibly lower recall.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first express our sincere gratitude to the reviewer for their time reading our work, writing a thorough review, and bringing up thoughtful comments.
We are encouraged that the reviewer finds our paper to be well written, our approach to be novel, and our experiments to be extensive and to properly demonstrate the robustness of our approach.
Here are our responses to the concerns raised by the reviewer:
- While our method does require finetuning, we also benefit from the vast amount of current work in the optimization and efficiency of LLM training. In any case, we argue that (moderate) finetuning does not constitute a technical weakness of our method.
- While we experiment with IDK-tuning for the reasons you mention, we believe integrating the [IDK] flavor of uncertainty-aware training into pretraining from scratch presents an exciting direction for future work (e.g., via loss weight schedules). We do believe that with small adaptations of our loss function, our training method could be applied to pretraining from scratch. One of the main reasons we could not test this is the extensive resources (time, compute, and memory) that controlled experiments in this setting would consume, which would be impractical or impossible on typical academic budgets in a reasonable amount of time. We also believe this direction warrants a detailed study that would fill a whole paper of its own.
- We agree that our method is not “free” (as it leads to somewhat lower recall), but argue that it presents a valuable tradeoff, given the importance of reducing hallucinations.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will keep my score unchanged as it still accurately reflects my feelings on the paper. | Summary: It introduces a novel method to address the issue of hallucinations in Large Language Models (LLMs). These models, despite their proficiency in capturing and generating human knowledge, can sometimes produce factually incorrect text. To combat this, the authors propose a calibration method that incorporates an [IDK] ("I don’t know") token into the model's vocabulary.
Strengths: The introduction of the [IDK] token is a creative solution to an existing problem in LLMs. It represents a novel way to handle uncertainty, which is not just a new definition or problem formulation but also a practical application within language models.
The paper proposes a new pretraining objective that modifies the conventional cross-entropy loss to incorporate the [IDK] token. This is an original contribution to the field of natural language processing.
Weaknesses: The paper primarily uses The Pile for training, which may not be representative of all possible language use cases or domains.
While the paper provides a good overview of the performance metrics, an in-depth error analysis could offer more insights into the types of errors the models are making and how the [IDK] token impacts these.
As the model is trained on web-crawled data, there is a risk of learning and perpetuating societal biases present in that data.
Technical Quality: 2
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: While the paper notes the potential for bias in the training data, it could provide more details on how this might affect the model's predictions and decision-making.
The paper could more explicitly discuss the potential for the model to contribute to the spread of misinformation, especially if it fails to correctly identify uncertain or incorrect information.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first highly thank the reviewer for their time reading our work, writing a thorough review and bringing up thoughtful comments.
We are encouraged that the reviewer finds our method a creative and novel way to handle an existing and important problem of current LLMs - generating misinformation and hallucinations.
We would like to address the following weaknesses / limitations of our work raised in the review:
- **“Using primarily The Pile”**: We argue that using a general, English-centric dataset is well suited to demonstrate the efficacy of our method and not an inherent weakness of our method. Additionally, as you may assume, each of these training sessions is extremely resource-intensive. Exploring other languages and domains is a good direction for future work.
- **“In-depth error analysis missing”**: We do provide a very detailed analysis and ablation of precision-recall tradeoffs in Section 4.2. Based on the reviewer’s suggestion, we decided to conduct an in-depth error analysis of our model’s mistakes. We will provide the results here and will add them to the camera-ready version of the paper too. The setup of the analysis is the following:
We randomly sample 200 examples (out of all the datasets) on which the IDK-tuned model generates a wrong answer (without predicting the [IDK] token). We then categorize each into one of four categories: No Effect (both the original model and our model generate the same answer), Noise (the original model knows it, while after our training it doesn’t), White Noise (both the original model and ours don’t know it, though they generate different answers), and Abstain (when our model abstains from answering while generating text like “unknown” or “a mystery”). For this analysis we take three different models: Mistral-7B, Pythia-2.8B and Pythia-70M. The results are the following:
| model | No Effect | Noise | White Noise | Abstaining |
|-------------|-----------|-------|-------------|------------|
| Mistral-7B | 68.5 | 9 | 6.5 | 16 |
| Pythia-2.8B | 59.5 | 13.5 | 12.5 | 14.5 |
| Pythia-70M | 52 | 18.5 | 22 | 7.5 |
These results suggest that first, the bigger the model, the fewer changes our training approach causes in the model’s generations, and second, the bigger the model, the greater its ability to abstain from answering via words (which is generally equal to generating the new [IDK] token, though harder to evaluate automatically).
- **“Bias from web-crawled pretraining data”**: We agree and will use the extra page to discuss this in an extended Limitations section. However, while important to consider, we argue that pretraining data bias is (1) mostly inherent to the way LLMs are trained nowadays and (2) not a particular weakness of *our method* but an exciting direction for follow-up work (e.g., using synthetic data).
- **"Potential for the model to contribute to the spread of misinformation"**: Indeed our extended Limitations section will also discuss the risk of misinformation. It is important to stress that we propose a single *method*, not a *system design* for safe deployment of LLMs. In practice, we anticipate our method to be coupled with other checks and balances, forming a *safe system*.
In light of this, we kindly request you to reconsider and appropriately raise the score of your review if we have sufficiently addressed some or all of your concerns. | Rebuttal 1:
Rebuttal: We thank all reviewers for their time and effort put into providing a thorough review of our work. We briefly highlight the strengths of our work as identified by the reviewers:
- Our [IDK] token approach is a “novel approach” deemed “an original contribution to the field of natural language processing”, “well-motivated”, and “creative”.
- The reviewers have praised our extensive experiments and ablation studies, while not overclaiming our results.
- Our paper is well-written and easy to follow.
In the individual responses, we address all points and questions raised by the reviewers in detail. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning | Accept (poster) | Summary: Extending context length is a fundamental challenge for LLMs. Unlike previous approaches that focus on efficiently handling text tokens, this paper introduces a novel method: encoding lengthy text information into image renderings. These image tokens effectively increase the context length and enhance the performance of LLMs across various downstream tasks.
Strengths: The idea is simple yet surprisingly effective. Previous works like PIXEL and CLIPPO suggested treating text as images to eliminate the need for tokenizers and unify various languages into a single image format. In contrast, this paper uses image rendering to encode lengthy texts, employing a PIXEL-like method to enhance long-context understanding.
Weaknesses: 1. Information loss from image rendering
The current approach has two potential sources of information loss: 1) rendering long texts into an image, and 2) encoding an image into MLLM embeddings. This loss is not thoroughly investigated in the paper. For instance, Table 2 shows that using the original lengthy 426 tokens outperforms the proposed rendered image, albeit at higher computational costs. While it is acceptable to trade some performance for efficiency, the trade-off should be clearly demonstrated in the paper.
---
2. Optimal compression rate
Due to the potential information loss, rendering very long texts into a single high-resolution image might not be optimal. For a context of, say, 2048 tokens, what is the best approach: a single image with 2048 words, or 32 images with 64 words each? The current ablation study only examines the rendering aspects like font size and font interval threshold. However, the trade-off between the number of images and the number of words per image is a crucial study that should be included in the paper.
---
3. Comparison with other long context methods
A significant weakness of the paper is the lack of comparison with other long context methods. Specifically, the paper must compare the proposed compression-into-image approach with compression-into-text approaches, such as [1-3]. Indeed, text compression loses the original semantics, while image rendering retains all previous words. Therefore, combining both approaches—compressing the semantics first and then rendering them into an image—could potentially offer the best of both worlds.
In addition to the text compression approach, it would be beneficial to discuss the pros and cons of this work in comparison with other long context methods. The current paper only discusses classic efficient self-attention models like Longformer. However, there are more recent and diverse approaches, such as handling long sequences by dividing them into multiple chunks [4-5] or using interpolation positional encodings [6-7], among others.
[1] Mu et al. Learning to Compress Prompts with Gist Tokens. NeurIPS 2023.\
[2] Chevalier et al. Adapting Language Models to Compress Contexts. EMNLP 2023.\
[3] Ge et al. In-context Autoencoder for Context Compression in a Large Language Model. ICLR 2024.\
[4] Bertsch et al. Unlimiformer: Long-Range Transformers with Unlimited Length Input. NeurIPS 2023.\
[5] Song et al. Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs. ICLR 2024.\
[6] Chen et al. Extending Context Window of Large Language Models via Positional Interpolation. arXiv 2023.\
[7] Li et al. Functional Interpolation for Relative Positions improves Long Context Transformers. ICLR 2024.
Technical Quality: 3
Clarity: 4
Questions for Authors: This paper claims that long texts can be converted into images. If that's the case, do we need text tokens at all? Can we replace all the text with images, instead of just some prefixes as is currently done?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Discussed, but preliminary. The paper only addresses a minor technical limitation regarding static vs. dynamic tokenization of images. However, there are many more potential limitations to consider. For example, I wonder if the current approach would also be effective for larger models, such as Llama-3, which has a longer context length of 8192.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Information loss from image rendering and trade-off between performance and computation cost:**
**1.Information Preservation in Rendering:**
The rendering process preserves all the words in the text image, ensuring that no textual information is lost during this step.
**2.Information Encoding:**
Evaluating potential information loss during the embedding step is challenging as it operates at the feature level.
**3. Performance vs. Computational Cost Trade-off:**
We acknowledge the trade-off between performance and computational efficiency. Using the original lengthy 426 tokens can outperform the rendered image for inference, but at a higher computational cost. To further investigate this trade-off, we conducted additional pre-training experiments using the smaller Mistral-7B model. We show the result below and find that with the same in-context text length and batch size, our method shows a slight decrease in performance (e.g., from 25.4 to 24.6 on average). However, our approach allows for `longer in-context lengths and larger batch sizes with the same computational resources`. When utilizing these advantages, the mean accuracy of our method clearly outperforms the baseline.
| **Method** | **ICL Length** | **BSZ** | **(okvqa)** | **(textvqa)** | **(vizwiz)** | **(vqav2)** | **Caption (coco)** | **Caption (flickr)** | **Mean** |
|------------------|----------------|---------|------------------|-------------------|------------------|-----------------|--------------------|----------------------|----------|
| **9B Baseline** | 512 | 16 | 17.1 | 14.8 | 21.5 | 26.5 | 40.1 | 32.1 | 25.4 |
| **+VisInContext** | 512 | 16 | 16.3 | 15.1 | 20.3 | 25.4 | 39.1 | 31.5 | 24.6 |
| **+VisInContext** | 512 | **64** | **18.5** | 17.4 | 22.3 | 27.0 | 41.2 | 31.8 | 26.4 |
| **+VisInContext** | **4096** | 16 | 18.3 | **19.3** | **22.5** | **28.4** | **42.3** | **34.8** | **27.6** |
> Caption: We compare the zero-shot evaluation of the OpenFlamingo-9B baseline model with our VisInContext. In this context, BSZ stands for batch size, and the prefix text token length is 128. With the same GPU memory, VisInContext supports a larger batch size or a longer in-context learning (ICL) length.
Due to the time limit of the rebuttal phase, we only conducted one analysis and will include a more detailed discussion of this trade-off in the revised version of our paper.
**Q2. Optimal compression rate:**
Our current implementation already supports multiple text image inputs for longer texts by splitting the text into several images. This allows us to process texts longer than what a single image can accommodate. In Figure 2, we depicted only one image for simplicity and clarity.
The changes in `font size and font interval directly correspond to the number of text images` required. For example, using a smaller font size results in fewer images needed to render the same amount of text. This relationship indicates that adjusting font size and interval settings effectively controls the number of images used.
**Q3. Comparison with other long context methods:**
Thank you for the insightful comment.
[1]: This method involves an additional fine-tuning stage and modifies the cross-attention mask of the transformer.
[2]: This method reuses the language model multiple times to obtain a summary vector for each segment and the training pipeline is very different.
[3]: This approach requires an additional instruction fine-tuning stage on specific instruction data to produce the desired output.
These methods focus on re-using the pre-trained model by fine-tuning on specific learning objectives [1,2] or instruction data [3]. While these methods are not directly compatible with our pre-training (which also increases ICL at this stage) and few-shot evaluation pipeline, these works have inspired us significantly. We are particularly interested in adopting the strategy from [2] during the inference stage to further increase the ICL length. However, the code is hard to adapt to our codebase and we are still working on it.
Regarding the pros and cons of other long context methods, we will expand our discussion in the revised manuscript to include:
[4] and [5]: We will discuss these techniques in lines 287-289. For [6] and [7], we will cover these approaches in lines 290-292, which use positional interpolation to extend the context window.
**Q4: Do we need text tokens?**
At the current stage, our method still relies on partial text input to perform `next-word prediction and uses autoregressive language modeling loss to optimize the model`. In this way, we can test downstream tasks directly by predicting the next word. Completely replacing text tokens with images would require developing a unified learning target beyond the current contrastive loss. This is a complex challenge and an area of ongoing research.
**Q5: Whether the current method suits larger models such as Llama-3:**
The current approach is indeed suitable for larger models like Llama-3 405B, which has an in-context length of 8192 during pre-training (enabled by techniques like pipeline parallelism on 16 H100 GPUs and low bit quantization). `These techniques are applicable to both vision and language models`, allowing vision tokens to scale similarly.
Furthermore, the parameters of the vision model are only a fraction of the overall model parameters, making it feasible to include more visual tokens without significant overhead. For instance, the Llama-3 405B vision-language model incorporates a ViT-H vision encoder with 630M parameters, demonstrating the compatibility of our method.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thank you for the rebuttal. As other reviewers noted, this paper challenges the common belief that "image space typically contains more redundancy than semantic space" and argues that images can serve as better tokenizers than texts. Given the ambitious nature of this claim, it may be more challenging to convince people.
However, the proposed concept of rendering images as text opens up several intriguing research directions. For example: 1) What is the optimal way to tokenize the combination of images and text? 2) How can we effectively train large models with these tokens, considering the challenges in next-token prediction?
For these reasons, I believe this paper could make a valuable contribution to the conference, sparking new ideas among attendees, and I am inclined to maintain my original rating of acceptance.
---
Reply to Comment 1.1.1:
Comment: Thanks for your timely and positive comment!
We want to emphasize that the idea of **challenging the common belief about image redundancy** is `not our primary claim` but rather an interpretation by the other reviewer. Our discussion is focused solely on rendered text images, which consist of white backgrounds and dark text, rather than real-world images.
Also, we `do not claim to replace text tokenizers entirely`. Instead, our work explores how rendered text images can be utilized within the scope of multimodal large language models (MLLMs) to increase in-context text length in novel ways. | Summary: The paper proposes a method to increase the context size of multi-modal large language models. The goal is to increase the context with minimal GPU memory for both training and inference as well as floating pointing operations. Finally, the authors show that the method obtains good results on in-context learning.
Strengths: The paper tackles an important problem and can be of interest to the research community. It seems to achieve promising results and the idea seems interesting, but there are parts that are unclear or need better studying.
Weaknesses: I think that the paper is hard to follow and some parts hard to understand. Please see below for some questions
My other concern is related to the experimental part, that I find a bit weak. The comparison is made against only two existing models and I don't clearly understand the numbers. Firstly, one concern is that there seems to be a degradation in performance with 0 shots in some cases.
Also, probably I am missing something, but let's look at Tab. 1. Wouldn't an increased context size (ICL 256 vs 2048) allow for more shots to be used? I don't understand what is different in terms of the theoretical context between 0-shot Open-Flamingo and VisInContext. Isn't the same information fed to the model? If yes, then I don't see how the experiment measures the impact of the context size. Or is it just a way to feed more information from a particular document as opposed to sub-sampling it? In the latter case, I would consider this a slightly unfair comparison, and I don't really understand the benefits of the model if, in the zero-shot setup where you have access to a larger portion of the document (as opposed to subsampling), the performance drops quite a lot in some cases (okvqa, vizwiz). This shows that the way the context is processed is not ideal.
Technical Quality: 3
Clarity: 2
Questions for Authors: Questions related to lines 75-76.
1. Both images and text are concatenated to result in a sequence of 256 tokens?
2. Is m chosen so that the concatenation is 256 tokens?
3. What does the corresponding text represent? Based on Fig. 2, `I_i` seems to be an actual image and not an image representing the text, so I am confused. Or the bear image has nothing to do with `I_i`? If it doesn't where do the image tokens `I_i` are used in Fig 2?
Questions lines 81-82.
4. Is this the same m as above? If yes, can you elaborate what the m represents? I assume no, since it's `M` vs `m`, but I think other letters can be used to make this more clear
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are briefly discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Questions related to line 75-76.**
_i. Clarification on Token Concatenation:_
Notice that <visualx> is used as `a placeholder to indicate the position of an image` within the token sequence. It has a `token length of 1`. For example, in a sequence of 256 tokens, the structure could be <visual1><text1><visual2><text2><visual3><text3>, where the visual placeholders are interleaved with text tokens. In this example, 253 tokens out of the 256 are text tokens. If the text exceeds the length limit, it will be truncated during preprocessing.
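As a rough illustration of this layout (not the paper's actual preprocessing code; the whitespace tokenizer and function name are placeholders), the interleaving and truncation could look like:

```python
def build_sequence(texts, max_len=256):
    """Interleave a length-1 visual placeholder before each text chunk,
    then truncate the whole sequence to max_len tokens.
    Hypothetical sketch: a whitespace split stands in for the real tokenizer."""
    tokens = []
    for i, text in enumerate(texts, start=1):
        tokens.append(f"<visual{i}>")  # placeholder occupies exactly one token
        tokens.extend(text.split())    # text tokens follow their image slot
    return tokens[:max_len]            # truncate if the limit is exceeded
```

For example, `build_sequence(["a cat", "a dog"], max_len=4)` keeps the first placeholder, its two text tokens, and the second placeholder before truncation.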
_ii. Pre-defined Hyperparameter $m$:_
The hyperparameter $m$ is predefined and set to a specific value, which is 3 in this work. It indicates the number of images included in a sequence. If there are not enough images to meet this value, zero tensors are used as padding to ensure the sequence length remains consistent at 3.
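A minimal sketch of this padding step (hypothetical shapes and function name, with numpy standing in for the actual tensor library):

```python
import numpy as np

def pad_images(images, m=3, shape=(3, 224, 224)):
    """Keep at most m images per sequence; pad with zero tensors
    so every sequence carries exactly m image slots."""
    padded = list(images[:m])
    while len(padded) < m:
        padded.append(np.zeros(shape, dtype=np.float32))  # zero-tensor padding
    return np.stack(padded)  # shape: (m, *shape)
```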
_iii. Explanation of Corresponding Text and Image Tokens:_
The interleaved dataset, such as MMC4, includes multiple images and their corresponding texts. The "corresponding text" refers to the text paired with a specific image. In our notation, $I_i$ represents the $i$-th raw image. In Figure 2, visual information from both raw images and rendered text images is integrated into the LLM using a `cross-attention mechanism`. In this mechanism, `the text tokens act as queries, while the keys and values are the visual tokens derived from the images, after applying token masking`. Notice that $I_i$ only attends to $T_i$ via the corresponding placeholder <Visual i>.
**Q2. Questions about Line 81-82:**
The $M$ in these lines represents the number of rendered text images used.
It is different from $m$, which is used to denote the number of raw images.
We will use a different letter in the revised version.
**Q3. Do not understand Table 1:**
_1. The details of ICL:_ The evaluation pipeline for MLLMs typically consists of a pre-training stage followed by a fine-tuning or zero-shot evaluation stage on downstream tasks. The "In-context Text Length" mentioned in Table 1 refers to the `pre-training stage (Lines 160-162)`. For downstream tasks, we ensure a fair comparison by maintaining the same number of shots and input settings across all models. This means that the settings for `both the baseline model and our method are identical during downstream evaluation`, allowing us to accurately assess the benefits of longer-context pre-training. Our method `naturally supports longer shots during inference (Figure 4)`; however, comparing results between significantly different shot numbers, such as 128 shots versus fewer shots, would be unfair, as more shots lead to better results in general. Therefore, we do not include these comparisons in Table 1.
_2. Seems to be a degradation in performance with 0 shots in some cases; okvqa and vizwiz drop a lot:_ Each dataset has its own biases and evaluates different capabilities of the models. It is common for results to be slightly below the baseline on some datasets, but the overall mean performance is significantly better than the baseline. Please also refer to Q4 in our response to Reviewer RVzob as a supplementary resource regarding the instability associated with increased shots.
**Q4. Comparison against only two models:**
Flamingo and Fuyu are two strong and representative methods: Flamingo represents models with visual encoders, while Fuyu represents models with only linear embeddings. All MLLMs fall into one of these two categories.
---
Rebuttal Comment 1.1:
Title: Rebuttal answer
Comment: Thank you for the rebuttal! I confirm I have read the rebuttal and I currently don't have other questions. The rebuttal brings more clarity and answers my questions, but I still think there might be a problem of clarity. Hence, I slightly raise my score.
---
Rebuttal 2:
Comment: Thank you for your prompt feedback and for raising the score! We are glad that our rebuttal brings more clarity and answers all your questions. We would love to resolve any remaining clarity problems, if you could elaborate more about “there might be a problem of clarity”.
---
Rebuttal Comment 2.1:
Comment: Hey! Sorry for not being more clear. I am referring to the original questions where I think for most of them, parts of the response from the rebuttal need to be included in the revised paper for added clarity. For example the explanation around Table 1, parameter m and the other parts. | Summary: This paper introduces a method called Visualized In-Context Text Processing (VisInContext) to address the challenge of processing long in-context texts in multimodal learning, which arises due to the high GPU memory and computational costs associated with lengthy textual content. VisInContext converts long textual content into images and uses a visual encoder to extract textual representations, thereby increasing the in-context text length that can be processed effectively. The method is based on a dual-stream encoder model that employs Token Masking and Text-Centric Contrastive Learning (TCCL) to improve the model's ability to learn from the rendered text images, and the paper demonstrates the effectiveness of VisInContext through experiments on various tasks, showing that it outperforms the baseline in terms of performance and inference cost, while also improving the optical character recognition (OCR) ability of the model.
Strengths: 1. The paper is well-written, which makes it easy to understand.
2. VisInContext can reduce GPU memory usage and FLOPs for both training and inference, allowing the model to handle much longer text contexts with lower computational cost.
3. The model trained with VisInContext delivers better performance on common downstream benchmarks for in-context few-shot evaluation and document understanding.
4. The method shows potential in document understanding tasks, as evidenced by the improvements on DocVQA and OCR VQA datasets, and the enhanced next-word prediction accuracy of the LLM on the Rendered Text dataset.
Weaknesses: 1. The proposed method involves multiple steps, including text rendering, token masking, and contrastive learning, which might add complexity to the implementation. This could be a barrier to adoption for some practitioners.
2. From my perspective, the pipeline of "text -> text-rendering -> LLM" seems somewhat derivative from the initial purpose of LLMs (Large Language Models). That is, one may question whether we truly need such a complicated paradigm. If the answer is yes, then this approach appears more akin to an engineering exercise. Further, the performance may be affected by the selected font type, font size, and even the rendered canvas.
3. The process of rendering text into images and then processing these images through a vision encoder may introduce new bottlenecks, particularly as the text length increases. The paper could explore these potential limitations or trade-offs in greater detail.
Technical Quality: 3
Clarity: 3
Questions for Authors: refer to weaknesses
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: My primary concern is the necessity of the "text -> text-rendering" process. Perhaps it would be beneficial to explore a new perspective that focuses on aligning visual and textual information more effectively. However, as we know, image space typically contains much more redundancy than semantic space. For the current version, I am not fully convinced that projecting text into the image space is the optimal approach. Furthermore, rendering text may introduce several additional complications that warrant careful consideration.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Complexity of Implementation and Barrier to Adoption:**
This is not the case; we would like to emphasize that our implementation is not only simple but also easy to adopt in future work.
**i. Text Rendering:** This step is performed during the preprocessing phase on the CPU and can be implemented in OpenCV with just a few lines of code. Specifically, our implementation requires only 5 lines of code.
**ii. Token Masking:** This is a straightforward selection process that can be implemented in a single line of code.
**iii. Contrastive Loss:** The computation of the contrastive loss is performed on the average token representations, which is a standard and easily implemented technique.
Overall, our method is designed to be simple and straightforward, ensuring it is accessible to practitioners without adding undue complexity.
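To make the simplicity concrete, here is a hedged numpy sketch of step iii, the contrastive loss on averaged token representations (an InfoNCE-style formulation; the temperature, names, and exact form are our illustrative assumptions, not the paper's code):

```python
import numpy as np

def tccl_loss(rendered_tokens, text_tokens, tau=0.07):
    """Contrastive loss between averaged rendered-text-image features and
    averaged text-token features. Inputs: (batch, tokens, dim) arrays."""
    a = rendered_tokens.mean(axis=1)                     # average over tokens
    b = text_tokens.mean(axis=1)
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)    # unit-normalize for
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)    # cosine similarity
    logits = a @ b.T / tau                               # (batch, batch)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()                    # match the i-th pair
```

Step ii (token masking) is then literally a one-line selection, e.g. `tokens = tokens * 0.0 if rng.random() < p else tokens`.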
**Q2. Derivation from the initial purpose of LLMs; Need for such a complex paradigm; Engineering exercise:**
We disagree with your assessment. Here are the key points:
**1. Clarification on Purpose:** It is not clear what you mean by the "initial purpose" of LLMs. Our focus is on `Multi-modality Large Language Models (MLLMs)`, not solely LLMs. Our VisInContext method significantly increases the in-context text length for MLLMs during both the pre-training and inference stages.
**2. Clarification on why we need such a complex paradigm:** Increasing the in-context text length is a crucial challenge, particularly during pre-training, while most works on LLMs require fine-tuning or post-tuning to support longer contexts. There is no prior work addressing this topic in MLLMs. It is widely recognized that vision encoders are significantly smaller than LLMs. Therefore, processing longer but not highly related text with a visual encoder at a very modest computational cost is an efficient approach. In fact, compared to rendered text images, `text tokenizers in LLMs require numerous human-defined preprocessing steps` such as lowercasing, punctuation removal, stop-word removal, tokenization, and more, generally involving almost ten steps. In this context, `rendered text images offer a much simpler paradigm` for processing text.
**3. Clarification on engineering exercise:** We use a fixed font and a simple white background. While font size does affect performance, we have already discussed this in our experiments. Our method significantly increases the in-context text length for MLLMs, representing a substantial improvement. This is not merely an engineering exercise but a `strategic enhancement to the model's ability to process long text`. Additionally, it is unclear what you mean by "engineering exercise".
In conclusion, our method addresses a critical need in MLLMs and provides tangible benefits, demonstrating its value beyond a simple engineering exercise. For the MoE-based MLLM, we increase the in-context text length from 256 to 2048 during the pre-training stage.
**Q3. Potential Bottlenecks of Using a Vision Encoder and Discussion of Limitations:**
We argue that both the LLM and the vision encoder have limitations in processing long text sequences, especially for the purpose of extending context for multimodal understanding. Below we summarize the pros and cons of using a vision encoder to process long text sequences.
**1. Size Comparison:** The vision encoder in an MLLM is typically much smaller compared to the LLM itself.
For example, the vision encoder is 340M parameters while the LLM is 56B in the MoE-based MLLM.
This size difference suggests that the vision encoder is a more economical way to process text information.
**2. Efficiency:** Rendered text images can encode more text within the same number of tokens (Lines 98-99), making this method efficient for handling longer contexts.
**3. Complementary Technique:** Our method is complementary to existing techniques for increasing in-context text length in LLMs. It provides an additional means to enhance model performance without replacing current methods.
**4. Limitations and Trade-offs:** We acknowledge potential limitations, such as dynamic token handling, which are discussed in Lines 302-304. However, our experimental results show that the benefits of our approach outweigh this limitation.
**Q4. Concerns about redundancy in image space and the necessity of the "Text -> Text-Rendering" Process:**
We disagree with the notion that projecting text into image space is suboptimal due to redundancy.
**1. Redundancy in Real-World Images vs. Rendered Text Images:** Real-world images often contain redundancy because adjacent patches look similar and include colors, textures, and background noise that are not directly related to the underlying semantic meaning. A rendered text image, by contrast, is primarily composed of meaningful text with minimal background detail, so it `closely mirrors the semantic density of the text` itself and significantly `reduces the typical redundancy` found in image space.
In addition, processing text as images has the following advantages:
**1. Efficiency of Image Tokens:** In our method, the number of image tokens is fewer than the equivalent number of text tokens for the same length of input (Line 98-99). This demonstrates that processing text in image space can be more efficient than using text tokens alone.
**2. Economic Efficiency:** Our approach shows that preprocessing text into image space is more economical than relying solely on text models. For example, see the FLOPs comparison in Figure 1.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer zUCF,
the discussion period draws to a close, could you please check and reply to the authors' response? Please also revise your score accordingly if needed.
Sincerely,
your AC.
---
Rebuttal Comment 1.2:
Comment: Thank you for the explanation. I have reviewed the response and comments raised by other reviewers. In my initial comments, the term "derivative" was actually meant to be "deviated". Apologies for the typos.
I still have some confusion about the exact helpful clues or information captured by contrastive learning. From the results, I can observe that it truly brings improvement. However, I am wondering what kinds of cases can be improved with and without the TCCL approach. It would be helpful if you could provide some comparison examples, such as cases where TCCL leads to significant improvements versus cases where it does not provide as much benefit. Concrete examples illustrating the strengths and limitations of the TCCL method would give me a clearer understanding.
---
Reply to Comment 1.2.1:
Comment: Contrastive learning in our method is designed to make the vision encoder and resampler work together as a `"visual text tokenizer."` This means it encourages the embeddings of rendered text images to align with those of regular text tokens, allowing them to capture similar overall meanings. With contrastive learning, we observe:
**Improved Next Word Prediction Accuracy:** We observed that models trained with contrastive learning `perform better in predicting the next word on the Rendered Text Image dataset`. This improvement indicates that the model has a better understanding of text and stronger OCR (Optical Character Recognition) capabilities. For example, in Figure 3 the validation OCR accuracy (val_ocr_average_acc1) drops significantly from 85.25 to 74.67 when contrastive learning is removed.
**Enhanced Performance on TextVQA:** According to Table 6, the VisInContext model with contrastive learning shows a significant improvement on the TextVQA dataset, with accuracy increasing from 18.3 to 21.8. This dataset requires the model to `read and understand text within images to answer questions`, highlighting the model's enhanced ability in text-based reasoning when contrastive learning is applied.
One potential limitation of the model with contrastive loss is its tendency to struggle in scenarios `where the image contains a significant amount of irrelevant or misleading text`, which can lead to incorrect interpretations. To illustrate this, we conducted a simple experiment using images with false text from the Typographic Attack dataset, as discussed in Section 7.2 of [1]. In this experiment, we posed a basic QA task asking, 'What is in this image?' The results showed that **the model with contrastive loss exhibited a higher language model (LM) loss**. For example, when presented with an image of a cup labeled with the misleading text 'iPad,' the model `incorrectly responded with 'A blue iPad.'`
Considering we are unable to provide figures at this stage to aid understanding, we believe these examples effectively demonstrate that contrastive learning is particularly beneficial in cases where understanding text within images is essential.
[1]. Joanna et al, Disentangling visual and written concepts in CLIP, CVPR'22 | Summary: The paper introduces Visualized In-Context Text Processing (VisInContext), a novel technique designed to enhance multi-modal learning models by efficiently expanding their in-context text length. This method transforms extensive text into visual tokens, substantially lowering GPU memory consumption and computational requirements during training and inference. Models utilizing VisInContext demonstrate superior performance on standard downstream benchmarks for in-context few-shot evaluation compared to conventional approaches. They also show improved document comprehension, particularly in document QA and sequential document retrieval tasks. An additional advantage of VisInContext is its compatibility with current context-extending techniques, allowing for potential combined applications.
Strengths: - The paper is well-written and easy to follow.
- The suggested method presents an innovative and intriguing concept: converting text into visual representations to decrease computational expenses.
Weaknesses: - The motivation of “text-only in-context few-shots experiment” is not clear. These experiments appear tailored specifically to validate the proposed method rather than addressing practical applications. In particular, the use of text-only versions of visual question answering (VQA) or image captioning tasks for in-context learning seems questionable. The relevance and applicability of such text-only adaptations of inherently visual tasks in this context require further justification.
- There are some unconvincing parts about token masking. In Line 87~88, the paper says the masking ratio of raw image tokens is 1.0. Then the model does not observe the raw image at all. Or, is the model initialized with OpenFlamingo Model, especially for cross-attention layer and resampler? If so, how does the token masking probability affect the model’s ability to learn text semantics from visual inputs? For my intuition, it seems there could be some trade-off of partly observing (masking) raw-image and learning text-image semantics from rendered text images at the same time.
- Minor :
- Reference, Figures needs compile check. There are some errors. (Line 159, 521, etc. )
- The formats of Table 1, 2 need to be more polished
Technical Quality: 2
Clarity: 3
Questions for Authors: - Table 1 exhibits an interesting trend where the baseline model occasionally shows improved performance with an increased number of shots in certain scenarios. This pattern might be attributed to specific characteristics of the datasets or peculiarities of the classification task at hand. What explanation do the authors propose for this counterintuitive observation?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The paper addresses thier limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The motivation of “text-only in-context few-shots experiment” is not clear:**
The motivation behind the “text-only in-context few-shots experiment” aligns with the common practices in mainstream few-shot learning architectures like Flamingo, IDEFICS, and EMU2[1]. These models often include `text-only prompts for zero-shot evaluation`.
For example, in a typical zero-shot evaluation setting, the input to the model is structured as __<text 1><text 2><text 0><image 0>__, where "0-shot" actually involves using 2-shot text-only prompts (Lines 170-173). This design helps bootstrap the output format, enabling the model to `produce answers that follow the style of the prompts`. Following this line, we test longer prompts beyond 2-shot text data.
So the primary reasons for using text-only prompts in these experiments are:
1. **Testing Prompt Understanding**: This setup tests the model's ability to clearly understand the prompt and follow instructions.
2. **Practicality**: It provides a practical advantage when prompt images are difficult to obtain, ensuring the model can still perform well in the absence of visual input.
Motivated by these, we propose to evaluate text-only in-context few-shot and demonstrate that text rendered as images can be comparably effective as raw text.
**Q2. The details about token masking in Lines 87-88:**
There seems to be a misunderstanding. The vision feature input to the cross-attention model is the sum of the raw image tokens and the rendered text image tokens, as depicted in Figure 2.
In our implementation, the `raw image tokens are masked with a pre-defined probability`, which is 50% in our case (details are provided in Lines 107-110). This means the model only observes the raw image tokens half of the time. The reference to a 1.0 masking ratio in lines 87-88 indicates that we mask all the raw image tokens when the model is not supposed to see them at all.
However, the rendered text image tokens remain intact and are still observed by the model.
The rationale behind this approach is that we experimentally found that models _tend to overly rely on raw pixel images, often ignoring the information from rendered text images_. By masking the raw image tokens, we encourage the model to pay attention to the rendered text images, thereby learning text-image semantics more effectively. For inference, we sum the raw image tokens and the rendered text image tokens (Lines 111-112).
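The described scheme can be sketched as follows (a hypothetical illustration with numpy; names and shapes are ours, not the paper's):

```python
import numpy as np

def fuse_visual_tokens(raw_tokens, rendered_tokens,
                       training=True, p_mask=0.5, rng=None):
    """Sum raw-image and rendered-text-image tokens. During pre-training,
    with probability p_mask the raw tokens are fully masked (ratio 1.0),
    forcing the model to rely on the rendered text image."""
    rng = rng or np.random.default_rng()
    if training and rng.random() < p_mask:
        raw_tokens = np.zeros_like(raw_tokens)  # mask ALL raw image tokens
    return raw_tokens + rendered_tokens  # inference path: plain sum, no masking
```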
**Q3. Some other minor typos:**
Thank you for pointing out these minor issues. We will fix the reference and figure compilation errors (Lines 159, 521) and polish the formats of Tables 1 and 2 to ensure they meet the required standards.
**Q4. Explanation for Baseline Model Occasionally Performing Better:**
**Attention to Visual Tokens:** For classification tasks, the model might prioritize distinguishing visual tokens over textual information. This could lead to better performance in scenarios where visual cues are more prominent.
`More shots do not always result in better performance.` As seen in related works like Flamingo (Table 1 in that work), it is not uncommon for more shots to occasionally result in worse performance. This variability can be due to the _random selection of example shots from the support sets_. If the selected examples are quite different from the query image-text pair, the model's performance might drop. Therefore, it is normal that the baseline occasionally outperforms our work, and we `mainly focus on mean accuracy over all datasets.`
Some works, like RICES [2], analyze this phenomenon and focus on prompt ensembling by retrieving samples similar to the query to significantly improve multi-shot performance, which is not our focus.
[1]. Sun Q, Cui Y, Zhang X, et al. Generative multimodal models are in-context learners[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 14398-14409.
[2]. Yang Z, Gan Z, Wang J, et al. An empirical study of GPT-3 for few-shot knowledge-based VQA. Proceedings of the AAAI Conference on Artificial Intelligence, 2022, 36(3): 3081-3089.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks for the response. I carefully read through the authors' responses and the discussions among the other reviewers. While most of my initial concerns have been addressed, I still have an unclear part, which is related to the token masking. In lines 87-88, the paper states that "which ensures that the model won’t simply be ignoring the text images during training." However, according to the rebuttal, "a 1.0 masking ratio" is needed for the situation when the model is not supposed to see the raw image tokens, such as text-only in-context few-shot, or zero-shot evaluation. The word "training" in line 88 confuses me. Could you elaborate on this more, please?
---
Reply to Comment 1.1.1:
Comment: Thanks for your timely response! The term "Training" in lines 87-88 refers to the **application of token masking exclusively during the pre-training stage**.
During the pre-training stage, token masking is applied so that 50% of the time the raw image tokens are masked and the LLM sees only the rendered text image. For the remaining 50%, the LLM processes both the rendered text image and the raw image together.
This approach _prevents the model from adopting a trivial solution that completely disregards the rendered text image_.
During `downstream evaluation tasks, no tokens are masked`. For instance, in the in-context text-only few-shot setting in Table 2, where rendered text images are available, the raw image token remains unmasked.
The vision feature input to the cross-attention model in this scenario is the sum of the raw image token and the rendered text image token.
For zero-shot evaluation in Table 1, where no rendered text image is present, only the grey-shaded components in Fig. 2 are retained.
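The masking-and-summation scheme described in this reply can be sketched in a few lines of Python. This is purely our toy illustration (the function name `fuse_tokens` and the list-of-floats token representation are made up), not the authors' implementation:

```python
import random

# Toy sketch (an illustration, not the authors' code): tokens are float lists.
_rng = random.Random(0)

def fuse_tokens(raw_tok, text_tok, p_mask=0.5, training=True, rng=_rng):
    # During pre-training, with probability p_mask the raw image tokens are
    # masked, forcing the model to attend to the rendered text image.
    if training and rng.random() < p_mask:
        return list(text_tok)
    # Otherwise -- and always at inference -- the raw image token and the
    # rendered text image token are summed element-wise.
    return [a + b for a, b in zip(raw_tok, text_tok)]

print(fuse_tokens([1.0, 2.0], [0.5, 0.5], training=False))            # [1.5, 2.5]
print(fuse_tokens([1.0, 2.0], [0.5, 0.5], p_mask=1.0, training=True)) # [0.5, 0.5]
```

In this sketch, `p_mask=1.0` corresponds to the "masking ratio of 1.0" case, where all raw image tokens are dropped and only the rendered text image is used.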
I hope this clears up any confusion. Please let me know if further clarification is needed.
---
Rebuttal 2:
Comment: The masking ratio of 1.0 mentioned in Lines 87-88 `differs from` the predefined probability of 50% referenced in Lines 107-110.
The predefined probability of 50% in Lines 107-110 indicates that, during pre-training, there is a **50% chance that the raw image tokens will be masked**.
The masking ratio of 1.0 in Lines 87-88 refers to the `specific implementation of masking`, where all raw image tokens are masked.
The purpose of Lines 87-88 is to address the issue where combining tokens from raw images and text images directly caused the network to overlook the text-image input (as mentioned in Lines 105-107).
---
Rebuttal Comment 2.1:
Title: Final rating
Comment: Ok, I got the context. Nevertheless, I think the manuscript should be clearer. Overall, I still think the idea of using rendered text is interesting and underexplored, despite the debate on the necessity or complexity of using rendered texts. Therefore, I will keep my initial rating. | null | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null
Disentangling Linear Quadratic Control with Untrusted ML Predictions | Accept (poster) | Summary: This paper presents a novel policy, DISC, designed to manage uncertain perturbations in dynamical systems by learning a confidence parameter online. The focus is on integrating predictions from machine learning (ML) tools, which are often unreliable, into linear quadratic control (LQC) frameworks. The key innovation of DISC is its ability to harness accurate predictions when available and mitigate the impact of erroneous forecasts, achieving a balance between consistency and robustness.
The paper begins by discussing the challenges of using ML predictions in control systems, particularly the issue of reliability due to factors like high model variability and out-of-distribution generalization issues. The authors propose a policy, λ-CON, which extends existing methods by adapting a vectorized confidence parameter for each latent variable. However, λ-CON has limitations in achieving optimal consistency and robustness. DISC addresses these limitations by dynamically learning the confidence parameter through online learning, ensuring better performance regardless of prediction accuracy.
Theoretical results provide competitive ratio bounds for DISC under both linear and general mixing functions, demonstrating its robustness and consistency. The paper also includes experiments in two real-world scenarios: drone navigation with mixed disturbances and voltage control in power grids. These experiments validate DISC's effectiveness, showing that it outperforms baseline policies in terms of cost and adaptability to rapid changes in the environment.
Strengths: - The introduction of a dynamic confidence parameter that adapts online is an important advancement in integrating ML predictions into control systems.
- The paper provides theoretical guarantees for DISC, offering competitive ratio bounds that highlight its robustness and consistency.
- The experimental validation in real-world scenarios, such as drone navigation and voltage control, demonstrates the practical relevance and effectiveness of DISC.
- DISC's ability to adapt to varying levels of prediction accuracy and environmental changes is a valuable feature for real-world applications.
Weaknesses: - The theoretical analysis relies on several assumptions, such as the continuity and invertibility of the mixing function, which may not hold in all real-world scenarios.
- While the experiments support the theoretical findings, the datasets used might not fully capture the diversity and complexity of real-world applications.
- The method may be complex to implement in practice, especially in real-time systems where computational resources are limited.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does DISC scale with the increase in the number of latent variables and the complexity of the system? Are there any practical limits to the number of latent components DISC can handle efficiently?
- What are the computational requirements for implementing DISC in real-time systems? Can the authors provide insights or benchmarks on the computational overhead introduced by the online learning of the confidence parameter?
- Have the authors considered extending DISC to handle nonlinear dynamical systems? If so, what challenges do they anticipate, and are there any preliminary results or insights they can share?
- Can the authors provide a more extensive comparison between DISC and other existing robust control methods? How does DISC's performance compare in terms of both consistency and robustness?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The study mainly focuses on specific applications like drone navigation and voltage control, and its applicability to other fields remains uncertain.
- The competitive ratio bounds depend on prediction errors, which might limit the policy's effectiveness in scenarios with high prediction inaccuracies.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for all the comments and we appreciate your positive feedback on our work!
Regarding the issues pointed out in the weakness part, here are our explanations and future plans:
***Assumptions:*** We acknowledge the limitations of assuming the continuity and bijectivity of the mixing function $f$. It is worth highlighting that these assumptions are standard in nonlinear independent component analysis (ICA) models [Hyvarinen 2016, Khemakhem 2020, Yang 2022, Zheng 2022]. These advances leverage additional assumptions on the latent variables in $s_t$ and the mixing function $f$ to make the model identifiable. Below we summarize the common assumptions made in those models (besides the standard assumptions that the latent variables are mutually independent and at most one of them is Gaussian). A similar table can be found in our Appendix B.1.
*Standard Assumptions on $f$ and $s$ for a Subset of Nonlinear ICA Models*
| **Nonlinear ICA Models** | **Key Assumptions on $f$** |
|--------------------------------------------|-------------------------------------------------------------------------------------------|
| Identifiable VAE [Khemakhem 2020] | Mixing function $f$ is bijective and smooth |
| Contrastive learning [Hyvarinen 2016] | Mixing function $f$ is bijective and smooth |
| Structural sparsity model [Zheng 2022] | Support of the Jacobian $\mathsf{J}_f(s)$ of $f$ is sparse |
| Volume-preserving model [Yang 2022] | Mixing function $f$ is bijective and $|\det \mathsf{J}_f(s)|=1$. |
| **Nonlinear ICA Models** | **Key Assumptions on $s$** |
|--------------------------------------------|-------------------------------------------------------------------------------------------|
| Identifiable VAE [Khemakhem 2020] | $(s_t(1),\ldots,s_t(k))$ are conditionally independent given a variable $u$ |
| Contrastive learning [Hyvarinen 2016] | $(s_t:t\in [T])$ is non-stationary or has temporal dependencies |
***Datasets:*** Thank you for the comment. In future work, we plan to evaluate DISC on a broader range of datasets and more diverse scenarios.
***Implementation Complexity:*** We discuss the practicality of DISC in the following 1-1 responses to the questions raised.
***Question 1.*** The core step in DISC is the computation of the action through the following $\lambda$-CON policy:
$$u_t =-Kx_t- Y\sum_{\tau=t}^{\overline{t}}\left(F^\top\right)^{\tau-t} P f\left(\lambda\circ\widetilde{s}_{\tau|t};\widetilde{\theta}_t\right),$$
where $Y:= (R+B^{\top}PB)^{-1}B^{\top}$; $\widetilde{s}_{\tau|t}$ and $\widetilde{\theta}_t$ denote the predicted latent variable and mixing parameter at time $t$. When the number of latent variables $k$ increases, the additional complexity comes from the computation of the Hadamard product $\lambda\circ\widetilde{s}_{\tau|t}$ ($k$ operations). The learning of $\lambda\in [0,1]^k$ involves solving an online optimization problem whose variable lies in $[0,1]^k$. In practice, the state dimension $n$ and action dimension $m$ are often much larger than $k$, so the number of latent variables is not a bottleneck in any of these key steps. For example, in our drone navigation example, external disturbances are caused by two latent variables, i.e., the wind speed and rain intensity. Similarly, in the second voltage control example, $k=3$, corresponding to PV integration, wind generation, and residential consumption.
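Under a linear mixing assumption $f(s;\theta)=\theta s$, the action computation above can be sketched in a few lines of numpy. All matrices below are random stand-ins for illustration, not quantities from the paper:

```python
import numpy as np

# Minimal sketch of the lambda-CON action under linear mixing f(s; theta) = theta @ s.
rng = np.random.default_rng(0)
n, m, k, w = 4, 2, 2, 3                      # state dim, action dim, latents, window
A_ = rng.normal(size=(n, n))
B_ = rng.normal(size=(n, m))
P, R = np.eye(n), np.eye(m)
K = np.linalg.solve(R + B_.T @ P @ B_, B_.T @ P @ A_)   # LQR-style feedback gain
F = A_ - B_ @ K                                          # closed-loop matrix
Y = np.linalg.solve(R + B_.T @ P @ B_, B_.T)             # Y = (R + B^T P B)^{-1} B^T
theta = rng.normal(size=(n, k))                          # linear mixing parameter

def lam_con_action(x, lam, s_pred):
    """u = -K x - Y * sum_j (F^T)^j P f(lam o s_pred[j]); s_pred holds w predictions."""
    acc = np.zeros(n)
    for j, s in enumerate(s_pred):
        acc += np.linalg.matrix_power(F.T, j) @ P @ (theta @ (lam * s))
    return -K @ x - Y @ acc

u = lam_con_action(rng.normal(size=n), np.full(k, 0.5), rng.normal(size=(w, k)))
print(u.shape)  # (2,)
```

Setting `lam` to all zeros recovers the plain feedback law $u_t=-Kx_t$, i.e., a fully robust mode that ignores the predictions.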
***Question 2.*** Due to space limitations, we provide a detailed discussion in the following official comment.
***Question 3.*** This is a great question. Dealing with nonlinear dynamics is challenging, since there may not exist an explicit expression of the total regret $J(\pi(\lambda)) - J^{\star} = \sum_{\ell=0}^{T-1}\psi_{\ell,T}^{\top}(\lambda) H \psi_{\ell,T}(\lambda)$ as a function of some trust parameter $\lambda$. It is therefore nontrivial whether optimizing $\lambda$ online is still possible. We have been thinking about this issue by constructing surrogate functions and would like to share more insights if you are interested.
***Question 4.*** We have compared DISC with vanilla MPC and LQR in our experiments. We will add experiments comparing DISC with robust control methods such as the $\mathcal{H}_{\infty}$ controller in terms of both consistency and robustness.
Thank you once again for your constructive feedback. We will incorporate all your suggestions to enhance the clarity and applicability of our work. Due to space limitations, we defer our responses to the remaining issues in the official comment.
```REFERENCES```
[Hyvarinen 2016] Aapo Hyvarinen, and Hiroshi Morioka. "Unsupervised feature extraction by time-contrastive learning and nonlinear ica." Advances in neural information processing systems 29 (2016).
[Khemakhem 2020] Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. "Variational autoencoders and nonlinear ica: A unifying framework." In International conference on artificial intelligence and statistics, pp. 2207-2217. PMLR, 2020.
[Yang 2022] Xiaojiang Yang, Yi Wang, Jiacheng Sun, Xing Zhang, Shifeng Zhang, Zhenguo Li, and Junchi Yan. "Nonlinear ICA using volume-preserving transformations." In International Conference on Learning Representations. 2022.
[Zheng 2022] Yujia Zheng, Ignavier Ng, and Kun Zhang. "On the identifiability of nonlinear ICA: Sparsity and beyond." Advances in neural information processing systems 35 (2022): 16411-16422.
---
Rebuttal 2:
Title: Additional Responses to Limitations
Comment: Besides the responses in the formal rebuttal (*will be revealed after the rebuttal phase*), due to space limitations, in this official comment we include our additional responses to the limitations mentioned at the end of the review:
***Limitation 1:*** While our study focuses on applications like drone navigation and voltage control, the underlying principles and methods of DISC are designed to be broadly applicable. We acknowledge that real-world deployment may vary across different domains, and future work will aim to test DISC in a wider range of applications to better understand its generalizability. We are also exploring how the methodology can be adapted to other fields. In future work, we plan to evaluate DISC on a broader range of datasets and more diverse scenarios.
***Limitation 2:*** We'd like to highlight that the dependency on the prediction error in the derived competitive ratio bound offers the first bound of this nature, expressed through a term that is an explicit function of the prediction error, without restrictive assumptions on the errors $(\overline{\varepsilon}(1),\ldots,\overline{\varepsilon}(k))$.
For reference, the competitive ratio bound is outlined informally as follows:
$$\mathbb{E}\left[\mathsf{CR}(\text{DISC})\right]\leq 1+o(1)+ O\left(\rho^{2w}\right) + \underbrace{O\left(\sum_{i=1}^{k}\frac{\overline{\varepsilon}(i)}{\Omega(T/w)+\overline{\varepsilon}(i)}\right)}_{\text{\textit{Best-of-both-worlds utilization}}},$$
where the $o(1)$ term hides quantities that vanish when the total number of steps $T$ increases; $k$ is the number of latent variables generating the perturbations; $\rho\in (0,1)$; $w$ denotes the prediction window size and each $\overline{\varepsilon}(i)$ (for $i=1,\ldots,k$) denotes the prediction error corresponding to each latent component.
The last term in this result *depends on the prediction error*, but it highlights the desired performance guarantee:
**Consistency:** When a component-wise prediction error $\overline{\varepsilon}(i)$ is small, the individual term ${\overline{\varepsilon}(i)}/\left({\Omega(T/w)+\overline{\varepsilon}(i)}\right)$ will be negligible. Precisely, if $\overline{\varepsilon}(i)=0$, the resulting $i$-th term becomes zero as well. This yields a smaller competitive ratio compared to traditional robust control policies that do not utilize disentangled predictions.
**Robustness:** Otherwise, it always holds that ${\overline{\varepsilon}(i)}/\left({\Omega(T/w)+\overline{\varepsilon}(i)}\right)\leq 1$, regardless of how high the prediction error becomes. The robustness can be inferred from this dependency on the prediction error such that even if $\overline{\varepsilon}(i)$ becomes large, the last term is still bounded since $\overline{\varepsilon}(i)$ appears in both the numerator and the denominator. It's worth mentioning that such a bounded competitive ratio is not achievable by traditional control policies such as MPC.
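To make this concrete, here is a two-line Python illustration of the per-component term; the constant `c` below is a stand-in for the $\Omega(T/w)$ factor, not a value from the paper:

```python
# Per-component "best-of-both-worlds" term eps / (Omega(T/w) + eps);
# c is a stand-in constant for Omega(T/w).
def bw_term(eps, c=100.0):
    return eps / (c + eps)

print(bw_term(0.0))        # 0.0  -> consistency: zero error contributes nothing
print(bw_term(1e9) < 1.0)  # True -> robustness: bounded for arbitrarily large error
```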
---
Rebuttal Comment 2.1:
Comment: Thanks for the response. I will keep my score.
---
Reply to Comment 2.1.1:
Comment: Thank you so much for the comment.
Due to space limitations, we have used the official comment section above to provide additional responses to the limitations mentioned at the end of the review.
Further responses to the major critical questions, including those regarding assumptions and implementation complexity, are included in the formal rebuttal and **will be revealed on August 6, 11:59 PM AoE, after the rebuttal period**. We hope you find our main responses satisfactory too.
---
Rebuttal 3:
Title: Additional Responses to Question 2
Comment: ***Question 2. Implementation complexity:***
We thank Reviewer 6dfr for this question and we argue that the additional computational overhead of DISC is minimal.
Like MPC, DISC requires the prediction of future disturbances at each time step for decision-making. Given this prediction, DISC's overhead primarily stems from two sources: the disentanglement algorithm and the online learning procedure for the confidence parameter $\lambda$. The latter mainly involves computing the gradient of $\zeta^{\top} H \zeta$ w.r.t. $\lambda$.
ICA theory requires that the latent dimension be smaller than the observation dimension for identification. The computational cost of disentanglement varies by algorithm but is generally efficient. For instance, in the setting of our experiments, the FastICA algorithm completes in under a second.
For the online learning procedure, under linear mixing the gradient computation involves several matrix multiplications whose dimensions are upper bounded by the observation dimension. For nonlinear mixing, one can employ a neural network to represent the mixing function and use inference and the Jacobian computation of this neural network to calculate the gradient. Both can be completed efficiently, within seconds, for observation dimensions typical in control tasks. In our experiments with linear mixing, the online learning of $\lambda$ takes less than a second.
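As a sanity check of the linear-mixing gradient computation, the following self-contained sketch uses stand-in matrices and takes `zeta` to be an affine function of $\lambda$ purely for illustration; it verifies the chain-rule gradient $\mathsf{J}^\top(H+H^\top)\zeta$ against finite differences:

```python
import numpy as np

# Stand-in setup: zeta(lam) = c - M @ lam, so its Jacobian w.r.t. lam is J = -M.
rng = np.random.default_rng(1)
n, k = 6, 3
H = rng.normal(size=(n, n)); H = 0.5 * (H + H.T)   # symmetric, as in the paper
M = rng.normal(size=(n, k))
c = rng.normal(size=n)

zeta = lambda lam: c - M @ lam
cost = lambda lam: zeta(lam) @ H @ zeta(lam)        # zeta^T H zeta
grad = lambda lam: (-M).T @ (H + H.T) @ zeta(lam)   # J^T (H + H^T) zeta

lam0 = np.full(k, 0.5)
eps = 1e-6
# Central finite differences, one coordinate of lambda at a time:
fd = np.array([(cost(lam0 + eps * np.eye(k)[i]) - cost(lam0 - eps * np.eye(k)[i])) / (2 * eps)
               for i in range(k)])
print(np.allclose(grad(lam0), fd, atol=1e-4))  # True
```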
Overall, DISC's computational requirements remain very manageable for most practical applications. | Summary: The submission considers LQC with perturbations. The considered problem is interesting with potential applications on LQC with noisy/unreliable ML predictions.
The submission improves the existing method in Robustness and consistency in linear quadratic control with untrusted predictions, where there is a constant parameter determining the confidence of predictions. The submission is novel by learning the parameter in an online manner.
Both theoretical and empirical results are provided supporting the proposed method.
Strengths: The proposed method is novel by learning the confidence parameter in an online manner. Such a method can have many applications in control with noisy predictions.
The proposed method is analyzed both theoretically and empirically. Improved results are achieved.
Weaknesses: The writing of the submission is really hard to follow.
First of all, the submission mentions some concepts that are abstract and imprecise, like "best-of-both-worlds" and "consistency and robustness".
Further, the introduction gets into technical details without proper definitions and explanations. Examples include "(1 + o(1))-consistent", "ω(1)-robust", CR(DISC), and "Best-of-both-worlds utilization".
Finally, the update rule of $\lambda_t$ in section 3.2 is missing. It is not self-contained to just refer to some other papers for such an important piece. Without a clear and explicit definition of the update rule of $\lambda_t$, it is very hard to reproduce or evaluate the proposed method.
Technical Quality: 3
Clarity: 2
Questions for Authors: The claim on line 157 is confusing. Why when $\lambda=0$, the $f$ term will disappear? This statement requires further assumptions on $f$ which are currently missing.
What is the update rule of $\lambda_t$?
Also, I am wondering whether it might be a easy/doable direction to extend the proposed method to the case with a dynamics like $x_{t+1} = Ax_t+ B\mu_t f(s_t)$, where the ML predictions also affect the effect of the actions.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We appreciate your comments on the strengths and acknowledge the concerns regarding the clarity of our writing and presentation. Please find our clarification below.
1. ```Clarification of Basic Concepts in Abstract``` The detailed meanings of "*best-of-both-worlds*" and "*consistency and robustness*" in our abstract are explained below. FYI, the original sentence reads
"*Our results highlight a first-of-its-kind “best-of-both-worlds” integration of machine-learned predictions, thus lead to a near-optimal consistency and robustness tradeoff*"
In our study, the predicted latent variable time series $(\widetilde{s}_{\tau|t} : t \leq \tau \leq \overline{t})$ and the mixing parameters $(\widetilde{\theta}_t : t \in [T])$ can vary in accuracy, potentially being either precise or erroneous.
The term “**best-of-both-worlds**” highlights our method's ability to ensure reliable performance (a feature that we refer to as **robustness**) in worst-case scenarios. Mathematically, this corresponds to a bounded competitive ratio even if the prediction errors are significant. Conversely, if the predictions are near-optimal, our method demonstrates adaptability and behaves similarly to an optimal controller (a.k.a. **consistency**). This dual capability is encapsulated in our main result, which establishes a bound on the (expected) competitive ratio, illustrating our method's effectiveness across varying levels of prediction accuracy. Note that the control policy is not given the prediction accuracy beforehand.
For further context, this can be seen in our main result that bounds the (expected) competitive ratio:
$$\mathbb{E}\left[\mathsf{CR}(\mathsf{DISC})\right]\leq 1+o(1)+ O\left(\rho^{2w}\right) + \underbrace{O\left(\sum_{i=1}^{k}\frac{{\color{blue}\overline{\varepsilon}(i)}}{\Omega(T/w)+{\color{blue}\overline{\varepsilon}(i)}}\right)}_{\text{\textit{Best-of-both-worlds utilization}}},$$
where the $o(1)$ term hides quantities that vanish when the total number of steps $T$ increases; $k$ is the number of latent variables generating the perturbations; $\rho\in (0,1)$; $w$ denotes the prediction window size and each $\overline{\varepsilon}(i)$ (for $i=1,\ldots,k$) denotes the prediction error corresponding to each latent component. The expectation is over the randomness of the control policy if $f$ is a general mixing function (note that the control policy becomes deterministic for the linear mixing case).
The meaning of **Best-of-both-worlds utilization** in the last term reflects the following guarantees on robustness and consistency:
**Robustness** The robustness can be inferred from the bound above that even if $\overline{\varepsilon}(i)$ becomes large, the last term is still bounded since $\overline{\varepsilon}(i)$ appears in both the numerator and the denominator. It's worth mentioning that such a bounded competitive ratio is not achievable by traditional control policies such as MPC.
**Consistency** On the other hand, if $\overline{\varepsilon}(i)=0$, the resulting $i$-th term becomes zero as well. This yields a smaller competitive ratio compared to traditional robust control policies that do not utilize disentangled predictions.
We hope the above explanations make sense. We will revise our abstract and introduction based on the concern and please let us know if there are further questions. Thank you again for the comment.
2. ```Technical Details```
The formal definitions of **$(1+o(1))$-consistent** and **$\omega(1)$-robust** are provided in Definition 2.2 (Section 2). Let $\mathsf{CR}(\pi;\varepsilon)$ denote the competitive ratio of a control policy $\pi$ with a fixed prediction error $\varepsilon$. Specifically, a policy $\pi$ is *$\gamma$-consistent* if its competitive ratio satisfies $\mathsf{CR}(\pi;\varepsilon)\leq \gamma$ for $\varepsilon=0$, and *$\kappa$-robust* if $\mathsf{CR}(\pi;\varepsilon)\leq \kappa$ for all $\varepsilon$.
We will add a pointer to clarify in our introduction section. Thank you for pointing out this issue.
3. ```Update Rule```
In fact, the update rules of $\lambda_t$ are provided formally in later sections for the linear mixing and general mixing cases, respectively. Section 3.2 briefly overviews our control policy without specifying the ONLINE-PROCEDURE. The main reason is that we use different update rules for the linear mixing and general mixing settings, so we chose to specify them in later sections, together with the theoretical results.
**Update Rule for Linear Mixing** In Section 4.2, Eq. (11), we have detailed the FTRL procedure for learning $\lambda_t$ in our context:
$$\lambda_t \in \operatorname{argmin}_{\lambda\in\mathcal{I}} \left(\sum_{\ell=0}^{t-1}\nabla_{\lambda}^\top\left(\zeta_\ell(\lambda)^\top H \zeta_\ell(\lambda)\right)\lambda + \frac{1}{\beta}\|\lambda-\lambda_0\|^2\right)$$
for some $\beta>0$ that can be optimized.
**Update Rule for General Mixing** Similarly, in Section 4.2, Eq. (12), we have detailed the FTPL procedure for learning $\lambda_t$.
We will revise Section 3.2 to incorporate the comment and improve our presentation.
***Question 1.*** *The claim on line 157 ...*
Thank you so much for the comment. Our statement there is not accurate under the current assumptions on $f$. We will revise the context accordingly. Note that our proof for the general mixing case only requires the Lipschitz continuity and bijectivity of $f$.
***Question 2.*** *Extend ... the ML predictions also affect the actions*.
In fact, in our $\lambda$-CON policy, $u_t$ indeed depends on the predicted mixing parameters and latent variables, i.e.,
$u_t =-Kx_t- Y\sum_{\tau=t}^{\overline{t}}\left(F^\top\right)^{\tau-t} P f\left(\lambda\circ\widetilde{s}_{\tau|t};\widetilde{\theta}_t\right)$
as specified in Eq. (5), Line 153. Going beyond our current setting, an interesting future direction is to consider the disentanglement of latent variables when they depend on current system states and actions.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your clarification. I have one follow-up question: for the Update Rule for Linear Mixing, how does the algorithm solve the argmin in the FTRL procedure for learning $\lambda_t$? Are we using a gradient-based method, or can we get a closed-form solution? Such information is crucial for the implementation of the proposed method.
Thank you.
---
Rebuttal 2:
Title: Thank You for Your Follow-up Question
Comment: Thank you for the great question. In the implementation of the algorithm, we solve the follow-the-regularized-leader (FTRL) optimization in Eq. (12) by the following online mirror descent (OMD) for the linear mixing setting:
$$\tau_{t} = \tau_{t-1}-\frac{\beta}{2}\nabla_\lambda\left(\zeta_{t}^\top H \zeta_{t}\right), \qquad \lambda_t = \mathsf{Proj}_{\mathcal{I}}(\tau_t),$$
where $\tau_t$ represents our internal parameter updates, $\nabla_\lambda$ denotes the gradient with respect to $\lambda$, and $\mathsf{Proj}_{\mathcal{I}}$ is the projection onto the feasible set $\mathcal{I}$ of $\lambda$.
The OMD above is shown in [Hazan 2010] to be equivalent to the FTRL procedure in Eq. (12).
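In code, one OMD iteration is simply a scaled gradient step followed by a box projection. A minimal numpy sketch follows, where the gradient vector is made up and stands in for $\nabla_\lambda\left(\zeta_t^\top H \zeta_t\right)$:

```python
import numpy as np

def omd_step(tau_prev, grad, beta):
    """tau_t = tau_{t-1} - (beta/2) * grad, then project onto I = [0,1]^k."""
    tau = tau_prev - 0.5 * beta * grad
    lam = np.clip(tau, 0.0, 1.0)   # Euclidean projection onto the box [0,1]^k
    return tau, lam

tau, lam = omd_step(np.array([0.9, 0.2]), np.array([4.0, -8.0]), beta=0.5)
print(tau)  # [-0.1  2.2]
print(lam)  # [0. 1.]
```

Note that for the box $\mathcal{I}=[0,1]^k$, the projection is just coordinate-wise clipping.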
More precisely, the gradient of the cost gap at time $t\in [T]$ is given by
$$\nabla_{\lambda}\left(\zeta_t^\top H \zeta_t \right) = \mathsf{J}^{\top}(\zeta_t)\left(H + H^\top\right)\zeta_t = 2 \mathsf{J}^\top\left(\zeta_t\right)H\sum_{\tau=\underline{t}}^{t}\left(F^\top\right)^{t-\tau} P \left(\theta s_t - \widetilde{\theta}_\tau\left(\lambda\circ\widetilde{s}_{t|\tau}\right)\right),$$
where we have used the fact that $H\coloneqq B(R+B^\top P B)^{-1} B^\top$ is symmetric. Moreover,
$$\mathsf{J}\left(\zeta_t\right):= -\sum_{\tau=\underline{t}}^{t}\left(\left(F^\top\right)^{t-\tau} P\,\widetilde{\theta}_\tau \begin{bmatrix} \widetilde{s}_{t|\tau}(1) & & \\ & \ddots & \\ & & \widetilde{s}_{t|\tau}(k) \end{bmatrix}\right)$$ denotes the $n\times k$ Jacobian matrix $\mathsf{J}\left(\zeta_t\right)$ of $\zeta_t$ with respect to the confidence parameter $\lambda$, whose $(i,j)$-th entry is $\frac{\partial \zeta_t^{(i)} }{\partial \lambda^{(j)}}$.
The implementation details provided here can also be found in **Lines 250-257 (Section 5. Experimental Setup)** and **Lines 793-794 (Section E in the Appendix)**. As suggested, we will revise our manuscript to further clarify the implementation. We appreciate your time and the critical role you play in the review process, and we remain fully open to any further questions or comments you might have during the discussion period.
[Hazan 2010] Elad Hazan, and Satyen Kale. "Extracting certainty from uncertainty: Regret bounded by variation in costs." Machine learning 80 (2010): 165-188.
---
Rebuttal 3:
Comment: Dear Reviewer 4nx1,
We hope our updated clarification of the Update Rule for Linear Mixing is helpful. Please don't hesitate to reach out if you have any further questions or need additional information. Thank you. | Summary: The paper considers the problem of LQR where the added perturbations may not be iid Gaussian noise but depend on some latent variables (which are themselves predicted using an ML model). In particular, the authors consider the dynamics given by
$$x_{t+1} = A x_t + B u_t + f(s_t, \theta),$$
where $s_t$ and $\theta$ are themselves predicted (up to some future window of length $w$) by an ML procedure. The goal is to provide an algorithm that (a) enjoys a good competitive ratio w.r.t. the optimal policy when the predictions are correct, and (b) is robust to errors in the predictions of the latent variables. The learner knows $A$, $B$, and $f$.
They provide theoretical results when the perturbations could be linear functions of the latent variables, as well as general functions, and provide experiments on drone navigation and power grids. The key idea in the algorithm is to first predict a $\lambda$ vector that decides how much to trust each latent variable, and then to run a $\lambda$-confident policy. The authors use an FTRL-style online algorithm to predict $\lambda$.
Strengths: The paper considers an important problem of using ML predictions in control and being robust to them. As far as I know, bounding competitive ratio for both consistency and robustness simultaneously has not been considered before in the control literature.
The lower bound in Theorem 4.1 that shows that a fixed \lambda does not suffice is interesting. In hindsight, it makes sense since the ML predictions can change arbitrarily (but it is useful to present it).
Sufficient experimental evaluation.
Weaknesses: The techniques are not fundamentally different from prior works; however, the fact that an online predictor of the $\lambda$ parameter followed by the plug-in control policy in Eqn. (5) suffices for robustness is interesting. I still support accepting the paper.
Minor typos:
Line 311 -- "do not reply" -> "do not imply"
Line 21 -- "adaptively" -> "can adapt"
Technical Quality: 3
Clarity: 3
Questions for Authors: Questions:
1. Is there a straightforward way to incorporate unknown A and B matrices?
2. Is there a way to use a fixed $\lambda$ but get $o(T)$-robustness while being $O(1)$-consistent, or vice versa, in Theorem 4.1?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the novel aspects and strengths of our work, especially the dual focus on consistency and robustness in the use of machine learning predictions within control systems. We also appreciate your acknowledgment of the importance of the problem we are addressing and the novel contribution of Theorem 4.1, which highlights the limitations of the fixed $\lambda$.
Regarding the concern about the novelty of the techniques used in our research:
***1. Technical Novelty*** While it is true that the foundational benchmarks such as competitive ratios [Goel 2022a, Sabag 2022], and the basic idea of learning trust parameters online [Li 2022] we employed are established within the field, our work is still technically novel in the following aspects.
***a. $k$ latent variables:*** We study a control problem
$$x_{t+1} = Ax_{t}+ B u_t + f(s_t;\theta), \quad t\in [T]$$
with perturbations depending on $k$ latent variables. Bounding the competitive ratio of the proposed control policy requires a significantly different technique (see Appendix E, the competitive analysis in Step 4 of the proof of Theorem 4.2) compared to those appearing in the previous literature. For example, special treatments of the total regret $J(\pi(\lambda)) - J^{\star} = \sum_{\ell=0}^{T-1}\psi_{\ell,T}^{\top}(\lambda) H \psi_{\ell,T}(\lambda)$ are necessary to decompose it as $O\left( \sum_{i=1}^{k}\left(\frac{w\overline{\varepsilon}(i) }{J^{\star}}\left(\lambda(i)\right)^2+\frac{\left(1-\lambda(i)\right)^2}{C_{0}\left\|f^{-1}\right\|^2} \right)\right)$ (see Eq. (43)) in the proof of Lemma 6, Appendix H.
***b. Nonlinear $f$:*** Compared with the linear quadratic control models in [Goel 2022b] and [Li 2022], nonlinear perturbations lead to further technical challenges on learning $\lambda$ since they form a Hadamard product inside the mixing function as in the $\lambda$-CON policy (Eq. (5)):
$$u_t =-Kx_t- Y\sum_{\tau=t}^{\overline{t}}\left(F^\top\right)^{\tau-t} P f\left(\lambda\circ\widetilde{s}_{\tau|t};\widetilde{\theta}_t\right),$$
where $Y:= (R+B^{\top}PB)^{-1}B^{\top}$.
***c. Quadratic form transformation lemma:*** More importantly, we do not assume *bounded variations* of system perturbations and errors, which are often required in existing self-tuning policies [Li 2022] and in the robust control literature, such as robust model predictive control (MPC) [Berberich 2020], to ensure worst-case guarantees. For instance, the competitive ratio bound in [Li 2022] depends on
a term $O\left(\frac{1}{J^{\star}}\left(\mu_{\mathrm{VAR}}(\mathbf{w})+\mu_{\mathrm{VAR}}(\widehat{\mathbf{w}})\right)^2\right)$
where the self-variation $\mu_{\operatorname{VAR}}(\mathrm{y})$ of a sequence $\mathrm{y}:=\left(y_0, \ldots, y_{T-1}\right)$ is defined as
$$
\mu\_{\mathrm{VAR}}(\mathrm{y}):=\sum\_{s=1}^{T-1} \max _{\tau=0, \ldots, s-1}\left\|y\_\tau-y\_{\tau+T-s}\right\|.
$$ Such a term arises because the self-tuning policy there updates $\lambda$ through an FTL-type online optimization.
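For concreteness, the self-variation quoted above can be computed directly from its definition; below is a minimal NumPy sketch (the function name `mu_var` is ours, purely illustrative):

```python
import numpy as np

def mu_var(y):
    """Self-variation of a sequence y = (y_0, ..., y_{T-1}) of vectors:
    mu_VAR(y) = sum_{s=1}^{T-1} max_{tau=0..s-1} ||y_tau - y_{tau+T-s}||,
    following the definition quoted above."""
    T = len(y)
    total = 0.0
    for s in range(1, T):
        # For each s, compare elements tau and tau + T - s (both in range).
        total += max(np.linalg.norm(y[tau] - y[tau + T - s]) for tau in range(s))
    return total
```

A constant sequence has zero self-variation, which is why the bound in [Li 2022] degrades only when the perturbation sequences fluctuate.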
In contrast, proving our main result is nontrivial: although it is known that an input-disturbed linear system can be reduced to online convex optimization (OCO) with structured memory [Shi 2020], the connection between the problem with $\lambda$-CON and a memoryless online optimization was not previously established. In the *quadratic form transformation lemma* (Lemma 1), we provide a result that decouples the loss terms in the total regret from future perturbations, thereby reducing the problem of choosing $\lambda_t$ to an online optimization instance. Then we use a two-stage analysis that combines a dynamic regret bound for our control policy with static regret bounds induced by online learning algorithms to derive the main result.
Furthermore, the perturbations are assumed to be bounded in [Li 2022]. Our technical achievement lies in offering the first bound of this nature, grounded in a term as a function of the prediction error, without restrictive assumptions on the errors $(\overline{\varepsilon}(1),\ldots,\overline{\varepsilon}(k))$.
Besides, the novelty of our work lies not only in the techniques themselves but also in how they are integrated together and applied to address a problem that has not been previously tackled in the learning and control community, as demonstrated by our practical examples.
We will add remarks to address these issues in the revised manuscript.
***2. Typos*** Thank you so much for catching them. We have changed "reply" to "rely" and "adaptively" to "can adapt" in the revised manuscript.
***3. Questions***
Thank you for the great questions. Due to space limitations, we discuss them in the following comment.
```REFERENCES```
[Berberich 2020] Julian Berberich, Johannes Köhler, Matthias A. Müller, and Frank Allgöwer. "Data-driven model predictive control with stability and robustness guarantees." IEEE Transactions on Automatic Control 66, no. 4 (2020): 1702-1717.
[Shi 2020] Guanya Shi, Yiheng Lin, Soon-Jo Chung, Yisong Yue, and Adam Wierman. "Online optimization with memory and competitive control." Advances in Neural Information Processing Systems 33 (2020): 20636-20647.
[Goel 2022a] Gautam Goel, and Babak Hassibi. "Competitive control." IEEE Transactions on Automatic Control 68.9 (2022): 5162-5173.
[Goel 2022b] Gautam Goel, and Babak Hassibi. "The power of linear controllers in LQR control." 2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022.
[Li 2022] Tongxin Li, Ruixiao Yang, Guannan Qu, Guanya Shi, Chenkai Yu, Adam Wierman, and Steven Low. "Robustness and Consistency in Linear Quadratic Control with Untrusted Predictions." ACM SIGMETRICS Performance Evaluation Review 50, no. 1 (2022): 107-108.
[Sabag 2022] Oron Sabag, Sahin Lale, and Babak Hassibi. "Competitive-ratio and regret-optimal control with general weights." 2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022.
---
Rebuttal 2:
Title: Additional Responses to Question 1 and 2
Comment: Further responses to the major concerns, including those regarding technical novelty are included in the formal rebuttal and **will be revealed on August 6, 11:59 PM AoE, after the rebuttal period**.
**Question 1: Learning $A$ and $B$**
Yes, in our work, we focus on learning the latent variable time series and the mixing parameters, and designing a corresponding control policy. For ease of presentation, we suppose $A$ and $B$ are given. Standard system identification methods can be used to estimate $A$ and $B$ offline. If $A$ and $B$ are unknown before applying the control policy, a new online control policy needs to be designed, and this is nontrivial for the dynamics considered in this work. For instance, it is not straightforward whether learning $A$ and $B$ online would lead to an additional logarithmic regret as in [Agarwal 2019], while simultaneously guaranteeing the near-optimal consistency and robustness tradeoff presented in our work, as well as the competitive ratio guarantee. This suggests an interesting future direction, and we will add a discussion in the revised manuscript.
**Question 2: Stronger statement in Theorem 4.1**
This is a great question. In fact, the statement in Theorem 4.1 can be tightened to the following:
"*There exists a constant $C>0$ such that if the $\lambda$-confident policy $\lambda$-CON is $(1+C)$-consistent, then it cannot be $o(T)$-robust, even if the mixing parameter estimate is perfect, i.e., $\overline{\eta}=0$.*"
In the proof of Theorem 4.1, Appendix G, assuming $\lambda$-CON is $(1+o(1))$-consistent implies $\lambda$ has to be $\mathbf{1}_k$. Setting $\mu=\log T$ directly implies this stronger claim. Indeed, the adversary can choose any $\mu>0$ to make the total policy cost as large as possible if $\lambda$ is fixed. Moreover, the assumption on the consistency can also be tightened from $(1+o(1))$ to $(1+O(1))$, i.e., $\lambda$-CON is $(1+C)$-consistent for some $C$ satisfying
$$C\\leq \\frac{1}{J^{\star}} \\lambda\_{\min} \\left((R+B^\\top P B)^{-1}\\right) \\sum\_{\\ell=0}^{T-1} \\left\\|B^{\\top} \\sum\_{\\tau=\\ell}^{\\overline{T}}\\left(F^\\top\\right)^{\\tau-\\ell} P \\theta\_{\\tau} \\left( \\left(\\tfrac{1}{2}\\mathbf{1}\_{k}\\right)\\circ s\_{\\tau|\\ell} \\right)\\right\\|^2.$$
We can construct matrices $A,B,Q,R$ and latent variables to make $C$ a positive constant. Then, applying inequality (33) in Appendix G, we know $\lambda$ cannot be the all-zero vector. The same argument as in Lines 818-821 implies that $\lambda$-CON cannot be $o(T)$-robust.
In our original presentation, we used $\omega(1)$ to indicate that learning $\lambda$ yields a fundamentally better tradeoff. We thank the reviewer once again for the constructive feedback, and we will add additional remarks in our manuscript to further clarify the points discussed above.
```REFERENCES```
[Agarwal 2019] Naman Agarwal, Elad Hazan, and Karan Singh. "Logarithmic regret for online control." Advances in Neural Information Processing Systems 32 (2019). | Summary: The paper investigates the setting of linear quadratic control with latent perturbations. In particular, the state evolution does not only depend on the current state and the control input, but also on various latent variables that are mixed through a linear or nonlinear mixing function of unknown parameterization. The authors attempt to address this problem by considering a $\lambda$-confident policy that balances consistency and robustness. However, they show that a fixed confidence value cannot achieve a good consistency and robustness tradeoff. Motivated by this, the authors suggest using a disentangled confidence policy, where the algorithm adaptively learns the trust parameter in an online manner. The authors are able to prove that their scheme achieves near-optimal competitive ratio bounds for both linear and general mixing cases (under reasonable assumptions). The authors confirm the significance and superior performance of adaptive learning for the trust parameter using interesting use cases inspired by real-world applications.
Strengths: - The problem statement is well-motivated and nice practical examples are provided.
- The paper makes several interesting contributions. I particularly liked the fundamental gap in Theorem 4.1, which motivates that we need to learn the confidence parameter $\lambda$ in an online manner. Furthermore, the authors provide competitive ratio results, not only for the linear case, but also for the general mixing case. The assumptions (e.g., Lipschitz continuity and bijectivity of $f$) are not so weak, but also not unreasonable.
- On the theory side, the paper is quite rich and the proofs seem to work in general (even though I do have some concerns, as I explain in the following sections).
- The experimental evaluation confirms the fact that online learning of the $\lambda$-confidence parameter is beneficial in two interesting settings. Results look good overall.
- The authors discuss several aspects of their work in the Appendix and most aspects are explained quite well.
Weaknesses: - The paper contains several errors in various formulas. In the Questions section below, I have mentioned several problems I was able to identify, but there could be more. These errors do not seem to be critical, but the manuscript still needs a very careful proofreading to fix them.
- I was not clear about certain theoretical claims, like the $l_4$ norm or certain assumptions. Please also see Questions section below. I believe the authors should be able to address all of them.
- Even though the paper offers theoretical result on general mixing functions, all experiments seem to focus on linear mixing. The authors explain that the nonlinear mixing case is indeed much harder than the linear one (where Fast ICA is used). To me, the problem in the general nonlinear case is that the mixing parameter $\theta$ can be much harder to estimate accurately, so the estimation of the latent variables will also be negatively affected. So, I am not sure how practical the proposed algorithms would be for the general nonlinear case. Perhaps simple experiments with just 2 latent variables (but nonlinearly mixed) could offer some preliminary insights.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Equation (8), it seems to me that the matrix $H$ was never previously introduced.
- In Equation (8), it seems to me that the $t$ index is not present in the right hand side of (8). Is it possible that the authors meant $\psi_{l,t}(\lambda)$ instead of $\psi_{l,T}(\lambda)$?
- Similarly, in Equation (9), shouldn't the upper index on the summation operator be $\min[l+w-1,t-1]$? At least, this is what the authors probably imply in Line 169.
- In Line 164, it is explained that any $\lambda_t$ takes values in $[0,1]^k$, but then in Line 168 $\lambda$ appears to have $T-1$ components. Obviously, this cannot work for the Hadamard product $\lambda \circ \tilde{s}\_{\tau \mid l}$. Shouldn't any given $\lambda$ have $k$ components in total? I believe the authors probably mean the sequence of $\lambda$'s in Line 168, but I think this should be explained more clearly.
- In Figure 2a, how exactly is the offline optimal policy generated? Do the authors rely on the prior literature for this purpose?
- The experiments by the authors seem to rely on linear mixing matrices. As the authors explain, this is by far the simplest setting, unlike general nonlinear mixing functions. Have the authors experimented with nonlinear mixing functions, even with just 2 latent variables? It would be interesting to see how the whole framework (including the estimation of the latent variables) is affected in such a case. Furthermore, the paper's theory covers nonlinear/general mixing functions.
- In Line 779, the authors make use of the $l_4$ norm. They claim that this is because of the Cauchy-Schwarz inequality. My understanding is that the Cauchy-Schwarz inequality $\mid <u, v> \mid \leq \mid\mid u \mid\mid \cdot \mid\mid v \mid\mid$ (where the norms are $l_2$-norms) is only true for the $l_2$ norm. I was not clear how the $l_4$ norm suddenly appeared there. Could the authors detail their calculations?
- Equation (42) in Line 850 also involves the $l_4$ norm, but I think it should be totally analogous to the point above.
- In Equation (32), the authors define $\psi_{t,T}$ but then in the right-hand side of (32) there is no $t$. Why not just use the same inner summation expression as in (31) instead of introducing new indexes? Furthermore, the inner summation expression in (31) has $-1$ as the upper index - why then in (32) this is changed to $\min[l+w-1,T-1]$ (or, perhaps, $\min[t+w-1,T-1]$)?
- Line 813 makes sense mathematically if $(R+B^{\top}PB)^{-1}$ is a positive definite matrix. Have the authors shown that formally? I was not so clear what is going on with matrix $P$ appearing inside it.
- I am not sure I understood the claim in Line 801. The authors claim that Lemma 2 implies that $J^*=\Omega(T)$ because $f(\mathbf{s}_t,\theta)$ is never 0 by assumption. However, in theory someone might argue that the claim could be violated if $f$ can become arbitrarily close to 0 but never exactly 0. Obviously, this is not possible for a continuous function, but it might be true for strange functions. For general functions, one might instead want to impose that $f$ is lower bounded by some positive constant. That said, I feel that this is not a problem in this work, because $f$ is assumed Lipschitz, hence it must also be continuous (and even uniformly continuous).
- For a more rigorous exposition, could the authors discuss why $\zeta_l(\lambda)^{\top}H\zeta_l(\lambda)$ are convex functions of $\lambda$ $\forall l$ in the linear mixing case or when $f$ is convex, so that OCO regret bounds are applicable in these cases?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your high-quality, very detailed, and insightful comments. They are very helpful, and we sincerely appreciate the effort you dedicated to the reviewing process. Below are our responses to the questions:
***Question 1***. Thank you for pointing out the typo. The matrix $ H \coloneqq B(R+B^\top P B)^{-1} B^\top$ is indeed defined in Appendix E (see Theorem E.1). We will revise the manuscript to include a definition of $H$ before Eq. (8).
***Question 2***. Thank you for catching this error. The correct notation indeed involves summation over $\ell$ rather than $t$, aligning with the notation used in Lemma 1. The corrected equation should read:
$$
J(\pi(\lambda)) - J^{\star} = \sum_{{\color{red}\ell=0}}^{T-1}\psi_{\ell,T}^{\top}(\lambda) H \psi_{\ell,T}(\lambda).
$$
We will update this in the manuscript.
***Question 3***. In Equation (9), the upper index on the summation operator is indeed accurate. The clarification in Lines 168-169 highlights that the per-step cost $ \psi_{\ell,T}^\top(\lambda) H \psi_{\ell,T}(\lambda)$ in Equation (8) depends on the time horizon $T$. Therefore, it cannot be solved as a canonical online optimization problem at each time $t$ since in general $\psi_{\ell,T}(\lambda)\neq \psi_{\ell,t}(\lambda)$ for $t<T$ (see Figure 7 in Appendix D for a pictorial explanation). This necessitates a tailored approach for online learning enabled by Lemma 1, which addresses this challenge and derives an equivalent optimization problem.
***Question 4***. Yes, in Equation (8), the total regret is defined as a function of the sequence $\lambda = (\lambda_0, \ldots, \lambda_{T-1})$, where each $\lambda_t$ within the sequence, belonging to the interval $[0,1]^k$, is learned online. We will clarify in the revised manuscript. Thank you for the comment!
***Question 5***. In Figure 2(a), the offline optimal controller is derived given all future perturbations $(Cw_t:t\in [T])$ of the following discrete-time kinematic model (similar models can be found in [Li et al. 2019] and [Yu et al. 2020]):
$$\begin{bmatrix}
\Delta x_{t+1}
\\\\
v_{t+1}
\end{bmatrix} = A \begin{bmatrix}
\Delta x_{t}
\\\\
v_{t}
\end{bmatrix} + B u_t + C w_t,
$$
for some matrices $A\in\mathbb{R}^{4\times 4}$, $B\in\mathbb{R}^{4\times 2}$, $C\in\mathbb{R}^{4\times 4}$. A detailed description of the drone piloting problem can be found in Example 1, Appendix B.2 and Appendix C.1. The offline optimal control is known as (see [Goel et al. 2022])
$$u^*_t=-Kx_t-(R+B^{\top}PB)^{-1}B^{\top}\psi^*_t$$
where
$\\psi_t^*=\\sum_{\\tau=t}^{T-1}(F^{\\top})^{\\tau-t} PCw_{\\tau}$,
assuming the terminal cost-to-go matrix is $P$. Note that this is equivalent to the MPC-type action in Eq. (5) with true predictions of latent variables and mixing parameters.
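For illustration, the offline optimal controller above can be sketched numerically. The sketch below assumes the standard LQR quantities $K=(R+B^{\top}PB)^{-1}B^{\top}PA$ and $F=A-BK$ (not restated in this reply), and uses a generic system rather than the drone model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def offline_optimal_actions(A, B, C, Q, R, w, x0):
    """Offline optimal control u*_t = -K x_t - (R + B'PB)^{-1} B' psi*_t with
    psi*_t = sum_{tau=t}^{T-1} (F')^{tau-t} P C w_tau, given ALL perturbations w.
    K is the standard LQR gain and F = A - BK the closed-loop matrix."""
    P = solve_discrete_are(A, B, Q, R)
    M = np.linalg.inv(R + B.T @ P @ B)
    K = M @ B.T @ P @ A          # LQR feedback gain
    F = A - B @ K                # closed-loop matrix
    T = len(w)
    xs, us = [np.asarray(x0, float)], []
    for t in range(T):
        psi = sum(np.linalg.matrix_power(F.T, tau - t) @ P @ C @ w[tau]
                  for tau in range(t, T))
        u = -K @ xs[-1] - M @ B.T @ psi
        us.append(u)
        xs.append(A @ xs[-1] + B @ u + C @ w[t])
    return us, xs
```

With zero perturbations, $\psi^*_t$ vanishes and the action reduces to the plain LQR feedback $-Kx_t$, matching the formula above.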
***Question 6***. Thank you for the great suggestion. We have been working on examples for nonlinear mixing functions, and have discussed them in the **Global Rebuttal**.
***Question 7***. Thank you for the question. The $\\ell_4$-norm appears because of the Hadamard product between $\\mathbf{1}\_k-\\lambda$ and $s_{\\tau}$. The detailed derivation is as follows.
Applying the Cauchy–Schwarz inequality, the term in Line 778 satisfies
$$||(\\mathbf{1}\_k-\\lambda)\\circ {s_{\\tau}}||\_{2}=
\left(\sum_{i=1}^{k}(1-\lambda(i))^{2}(s_{\tau}(i))^{2}\right)^{\frac{1}{2}}\leq
\left[\left(\sum_{i=1}^{k}(1-\lambda(i))^4\right)^{\frac{1}{2}}\left(\sum_{i=1}^{k}(s_{\tau}(i))^4\right)^{\frac{1}{2}}\right]^{\frac{1}{2}}$$
where the RHS is
$$\left(\sum_{i=1}^{k}(1-\lambda(i))^4\right)^{\frac{1}{4}}\left(\sum_{i=1}^{k}(s_{\tau}(i))^4\right)^{\frac{1}{4}}=||\\mathbf{1}\_{k}-\lambda||\_{4}||s\_{\tau}||\_{4}.$$
Note that the Cauchy–Schwarz inequality applies to the inner product between the vectors $\\left((1-\lambda(1))^{2},\ldots,(1-\lambda(k))^{2}\\right)$ and
$\\left((s_{\tau}(1))^{2},\ldots,(s_{\tau}(k))^{2}\\right)$ whose entries are already squared.
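The chain of inequalities above is easy to verify numerically; a minimal sketch with random vectors (all names purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 8
lam = rng.uniform(0.0, 1.0, size=k)   # lambda in [0,1]^k
s = rng.normal(size=k)                # stand-in for s_tau

# ||(1 - lam) o s||_2  <=  ||1_k - lam||_4 * ||s||_4
lhs = np.linalg.norm((1.0 - lam) * s, 2)
rhs = (np.sum((1.0 - lam) ** 4) ** 0.25) * (np.sum(s ** 4) ** 0.25)
assert lhs <= rhs + 1e-12
```

The inequality is exactly Cauchy-Schwarz applied to the entrywise-squared vectors, followed by a square root.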
***Question 8***. Yes, Eq. (42) can be derived similarly.
***Question 9***. We apologize for the typo in Eq. (32). The original term $\psi_{t,T}$ should be replaced by $\psi_{\ell,T}$. We changed the time index in the summation from $t$ to $\ell$ to avoid abusing the online time index $t$ but didn't fix the index in $\psi_{t,T}$. Given a fixed prediction window size $w$, the upper index would be $\min\\{\ell+w-1,T-1\\}$ in Eq. (31). Thank you again for correcting us. We have checked all the equations thoroughly in the revised manuscript to make them consistent.
***Question 10***. The property of $P$ being positive definite is guaranteed as a solution of the Discrete-time Algebraic Riccati Equation (DARE). More specifically, given positive definite $Q$ and $R$, $P$ as a solution of the following DARE
$$
P=Q+A^\top P A - A^\top PB (R+B^\top P B)^{-1} B^\top PA.
$$
is symmetric and positive definite. This is a well-known result in classical optimal control theory (cf. discussions in [Richardson 1986]), and $P$ determines the solution of the LQR and LQG problems.
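As a numerical illustration of this fact (with random stand-in matrices, not the paper's system), SciPy's `solve_discrete_are` returns a symmetric positive definite $P$ for positive definite $Q$ and $R$:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(1)
n, m = 4, 2
A = rng.normal(size=(n, n)) / np.sqrt(n)   # generic (A, B) is controllable
B = rng.normal(size=(n, m))
Q = np.eye(n)
R = np.eye(m)

P = solve_discrete_are(A, B, Q, R)
assert np.allclose(P, P.T)                      # symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)        # positive definite
# P satisfies the DARE: P = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
res = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A
assert np.allclose(P, res)
```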
***Question 11***. Yes, as you noted, we use the Lipschitz continuity of $f$ here. We will clarify this in the revised proof.
***Question 12***. Due to space limitations, we have appended a proof in **Global Rebuttal** above.
```REFERENCES```
1. [Li 2019] Yingying Li, Xin Chen, and Na Li. "Online optimal control with linear dynamics and predictions: Algorithms and regret analysis." Advances in Neural Information Processing Systems 32 (2019).
2. [Yu 2020] Chenkai Yu, Guanya Shi, Soon-Jo Chung, Yisong Yue, and Adam Wierman. "The power of predictions in online control." Advances in Neural Information Processing Systems 33 (2020).
3. [Goel 2022] Gautam Goel, and Babak Hassibi. "The power of linear controllers in LQR control." 2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022.
4. [Richardson 1986] Richardson TJ, Kwong R. On positive definite solutions to the algebraic Riccati equation. Systems & control letters. 1986 Apr 1;7(2):99-104.
---
Rebuttal 2:
Title: Additional Responses to Question 3
Comment: We greatly appreciate all the questions received. Further responses to the major questions, including those regarding Questions 1-11, are included in the formal rebuttal and **will be revealed on August 6, 11:59 PM AoE, after the rebuttal period**.
In this official comment we summarize our responses provided earlier and provide detailed answers to Question 3 and 12.
```1. Errors in Formulas``` Thank you for identifying errors in various formulas throughout the manuscript. We will conduct an extensive review of the entire manuscript. We present responses to the issues one-by-one in the rebuttal section.
```2. Theoretical Claims and Assumptions``` Thank you for your feedback regarding the clarity of certain theoretical claims in our manuscript. We defer our responses to the rebuttal section, where we carefully respond the questions to ensure that all your concerns are adequately addressed.
```3. Nonlinear Mixing Case``` Thank you for your insightful comments regarding our focus on linear mixing in the experiments. We provide additional discussions in **Global Rebuttal**.
Below we include additional answers to Questions 3 in this comment.
***Question 3.*** For further context, consider the application of online optimization strategies to minimize the total regret as depicted in Equation (8). At each time $t$, we obtain $\psi_{\ell,0}(\lambda),\ldots,\psi_{\ell,t}(\lambda)$. For $\psi_{\ell,t}(\lambda)$, since the summation is over $\tau\in \\{\ell,\ldots,\min\\{\ell+w-1,t-1\\}\\}$, $\psi\_{\ell,t}:\mathcal{I}\rightarrow\mathbb{R}^{n}$ is a function that depends on $t\in [T]$. But the original total regret formula (up to time $t$), $\sum_{\ell=0}^{t-1}\psi_{\ell,T}^{\top}(\lambda) H \psi_{\ell,T}(\lambda)$, relies on the future $\psi_{\ell,t}(\lambda),\ldots,\psi_{\ell,T-1}(\lambda)$.
Thus, the formulation of the offline optimization (8) does not conform to a canonical framework suitable for online optimization, necessitating the role of Lemma 1. We will revise the manuscript to clarify.
---
Rebuttal Comment 2.1:
Title: thank you for rebuttal
Comment: I appreciate the authors' very detailed rebuttal. I believe these clarifications significantly improve the exposition and the manuscript clarity. For this reason, I will increase my score. Some clarifications may have seemed obvious, but I still think it is good to incorporate them in the revised version of the manuscript, as not all readers will be familiar with all the details. Regarding the experiments with nonlinear mixing functions, I do realize this case is much more challenging but the preliminary results presented in the rebuttal are encouraging.
---
Rebuttal 3:
Comment: Thank you once again for all the comments. We appreciate your feedback on the preliminary results with nonlinear mixing functions and will incorporate the clarifications in the revised manuscript. Please let us know if there are any additional concerns or suggestions.
---
Rebuttal Comment 3.1:
Comment: I meant to raise my score to 7, not 6. Sorry for the inconvenience. | Rebuttal 1:
Rebuttal: We greatly appreciate all the questions received. In this global rebuttal, we provide more detailed discussions.
***Nonlinear Mixing Function:***
Regarding Question 6 from F6Jr, we provide a simple experiment involving two latent variables to offer preliminary insights. While our current experiments focus on the linear setting, DISC’s implementation can be efficiently generalized to nonlinear scenarios.
Following the setting in our paper, we use the same $A$ and $B$ as in the paper's tracking example. The disturbance $w = [w\_1, w\_2, w\_3, w\_4]$ is generated nonlinearly from a two-dimensional latent variable $s = [s\_1, s\_2]$, where $s\_1$ records the absolute value of a sinusoidal curve, and $s\_2$ is the absolute value of a Laplacian noise. We set $w\_1=w\_3= 2s\_1+3s\_2^2$, and $w\_2 = w\_4 = 4s\_1^2-3s\_2$.
For disentanglement in this nonlinear setting, we employ Slow Feature Analysis (SFA) with nonlinear features. SFA is an unsupervised learning algorithm that extracts slowly varying features from quickly varying input signals. It is closely related to ICA and is a strong candidate for blind source separation problems. We extend the linear mappings of vanilla SFA to nonlinear ones by expanding the inputs into a nonlinear basis using scikit-learn's *PolynomialFeatures*.
The online learning of confidence parameter $\lambda$ involves computing the gradient of $\zeta^{\top} H \zeta$ w.r.t. $\lambda$, which can be computed similarly to the linear mixing setting. In this nonlinear example, we employ a neural network to approximate the mixing function, using its inference and the Jacobian computation to calculate the gradient efficiently (also see our response under 'Implementation complexity' to Reviewer 6dfr for more details). In addition, DISC's performance naturally depends on the performance of the employed nonlinear disentanglement algorithm. Developing more efficient and stable disentanglement methods is an interesting direction for future research.
Experimental results show the baseline self-tuning policy achieves a cost ratio (CR) of **3.22**, the MPC a CR of **3.30**, and the LQR a CR of **8.28**. Nonlinear DISC achieves a CR of **2.22** in this example of nonlinear mixing functions, beating the other baselines. All results are averaged over 5 runs. We also include nonlinear DISC's learning curves of each component of $\lambda$ in the attached **PDF**. We find that $\lambda_0$, which corresponds to the more predictable latent $s\_1$ (sinusoidal curve), quickly converges to the optimal value of 1. Conversely, $\lambda_1$, which corresponds to the more unpredictable latent $s\_2$ (Laplacian noise), stays below 0.5. We will add more examples in the revised manuscript and make the code public.
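A minimal sketch of the described setup, with illustrative hyperparameters (scikit-learn provides `PolynomialFeatures` but not SFA itself, so the linear SFA step is implemented by hand here; this is not the authors' released code):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
T = 2000
s1 = np.abs(np.sin(0.05 * np.arange(T)))        # slow latent: |sinusoid|
s2 = np.abs(rng.laplace(scale=0.3, size=T))     # fast latent: |Laplacian noise|
# Nonlinear mixing as described: w1 = w3 = 2 s1 + 3 s2^2, w2 = w4 = 4 s1^2 - 3 s2
w = np.stack([2*s1 + 3*s2**2, 4*s1**2 - 3*s2,
              2*s1 + 3*s2**2, 4*s1**2 - 3*s2], axis=1)

# Nonlinear basis expansion followed by linear SFA: whiten the expanded
# features, then take the direction whose temporal differences are slowest.
X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(w)
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, S > 1e-8 * S[0]] * np.sqrt(T)          # whitened, unit-variance features
D = np.diff(Z, axis=0)
evals, evecs = np.linalg.eigh(D.T @ D / (T - 1))
slow = Z @ evecs[:, 0]                          # slowest extracted feature

# Normalized delta-variance ("slowness") of the extracted feature vs. a raw one.
slow_rate = np.mean(np.diff(slow) ** 2) / np.var(slow)
raw_rate = np.mean(np.diff(w[:, 0]) ** 2) / np.var(w[:, 0])
```

By construction `slow_rate` is the minimal delta-variance over unit-variance feature combinations, so it falls below `raw_rate`; the extracted feature is dominated by the slow latent $s_1$.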
***Convexity of $\zeta\_{\ell}(\\lambda)^{\\top}H\zeta\_{\ell}(\\lambda)$:***
Regarding Question 12 from Reviewer F6Jr, below we verify that $\zeta\_{\ell}(\\lambda)^{\\top}H\zeta\_{\ell}(\\lambda)$ is a convex function of $\\lambda\\in [0,1]^{k}$.
By definition,
$$\zeta\_{\ell}(\\lambda)^{\\top}H\zeta\_{\ell}(\\lambda)=\\left[\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\\left(\theta s\_{\ell}-\widetilde{\theta}\_{\tau}(\lambda\circ \widetilde{s}\_{\ell|\tau})\\right)\\right]^{\top}H\\left[\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\\left(\theta s\_{\ell}-\widetilde{\theta}\_{\tau}(\lambda\circ \widetilde{s}\_{\ell|\tau})\\right)\\right],$$
which equals to (since $H$ is symmetric)
$$\\underbrace{\\left[\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\theta s\_{\ell}\\right]^{\top}H \\left[\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\theta s\_{\ell}\\right]}\_{\text{constant independent of } \lambda}+\\left[\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\widetilde{\theta}\_{\tau}(\lambda\circ \widetilde{s}\_{\ell|\tau})\\right]^{\top}H \\left[\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\widetilde{\theta}\_{\tau}(\lambda\circ \widetilde{s}\_{\ell|\tau})\\right]-2\\left[\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\theta s\_{\ell}\\right]^{\top}H \\left[\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\widetilde{\theta}\_{\tau}(\lambda\circ \widetilde{s}\_{\ell|\tau})\\right].$$
Now, we simplify the remaining two terms. Notice that $\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\widetilde{\theta}\_{\tau}(\lambda\circ \widetilde{s}\_{\ell|\tau}) = \Lambda \lambda$ for some matrix $\Lambda\in\mathbb{R}^{n\times k}$. To see this, denote $\digamma\_{\ell,\tau}:=(F^{\top})^{\ell-\tau}P\widetilde{\theta}\_{\tau}\in\mathbb{R}^{n\times k}.$ We get
$$\\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\widetilde{\theta}\_{\tau}(\lambda\circ \widetilde{s}\_{\ell|\tau}) = \\sum\_{\tau=\overline{\ell}}^{\ell}\digamma\_{\ell,\tau}(\lambda\circ \widetilde{s}\_{\ell|\tau})=\begin{bmatrix}\vdots & & \vdots \\\\ \digamma\_{\ell,\tau}(1) & \cdots & \digamma\_{\ell,\tau}(k) \\\\ \vdots & & \vdots \end{bmatrix} \begin{bmatrix} s_1\lambda_1 \\\\ \vdots \\\\ s_k \lambda_k \end{bmatrix} = \underbrace{\begin{bmatrix}\vdots & & \vdots \\\\ s_1\digamma\_{\ell,\tau}(1) & \cdots & s_k\digamma\_{\ell,\tau}(k) \\\\ \vdots & & \vdots \end{bmatrix}}\_{=:\Lambda} \begin{bmatrix} \lambda_1 \\\\ \vdots \\\\ \lambda_k \end{bmatrix}.$$
Therefore, it suffices to validate the matrix $H$ is positive semi-definite to show $(\Lambda \lambda)^{\top} H (\Lambda \lambda)-2\phi^{\top} H (\Lambda \lambda)$ is convex where $\phi:= \\sum\_{\tau=\overline{\ell}}^{\ell}(F^{\top})^{\ell-\tau}P\theta s\_{\ell}\in\mathbb{R}^{ n}$. Note that $H=B(R+B^\top P B)^{-1} B^\top$. Since $R$ is positive definite by assumption and the DARE solution $P$ is also positive definite (see the response to Question 10), $(\Lambda \lambda)^{\top} H (\Lambda \lambda)-2\phi^{\top} H (\Lambda \lambda)$ is a summation of a quadratic form and a linear function of $\lambda$, validating the convexity of $\zeta\_{\ell}(\\lambda)^{\\top}H\zeta\_{\ell}(\\lambda)$.
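The argument can also be sanity-checked numerically; in the sketch below, $\Lambda$ and $\phi$ are random stand-ins (not derived from a real trajectory), while $H$ is built from a DARE solution as in the text:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(2)
n, m, k = 4, 2, 3
A = rng.normal(size=(n, n)) / np.sqrt(n)
B = rng.normal(size=(n, m))
P = solve_discrete_are(A, B, np.eye(n), np.eye(m))  # positive definite DARE solution
H = B @ np.linalg.inv(np.eye(m) + B.T @ P @ B) @ B.T
assert np.all(np.linalg.eigvalsh((H + H.T) / 2) > -1e-10)   # H is PSD

Lam = rng.normal(size=(n, k))   # stand-in for the matrix Lambda
phi = rng.normal(size=n)        # stand-in for phi
g = lambda lam: lam @ Lam.T @ H @ Lam @ lam - 2 * phi @ H @ Lam @ lam

# Midpoint convexity on random pairs in [0,1]^k
for _ in range(100):
    u, v = rng.uniform(size=k), rng.uniform(size=k)
    assert g(0.5 * (u + v)) <= 0.5 * g(u) + 0.5 * g(v) + 1e-9
# Equivalently, the Hessian 2 Lam' H Lam is PSD
assert np.all(np.linalg.eigvalsh(Lam.T @ H @ Lam) > -1e-10)
```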
Pdf: /pdf/72d58a9e315f9f838e2ee721a41e704cf08ed39d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm | Accept (spotlight) | Summary: The paper studies the performance of the Quantum Approximate Optimization Algorithm (QAOA) for a classical average-case problem from high-dimensional statistics: tensor principal component analysis (tPCA), which exhibits a computational-statistical gap. The paper investigates if this algorithm can achieve a quantum advantage over its classical counterparts. The paper makes progress towards this question and suggests that the expected answer is somewhat negative (at least for QAOA). The main results are
1. After 1 step of QAOA, it achieves weak recovery at the same SNR threshold (up to constants) as achieved by 1 step of classical tensor power iteration.
2. Using heuristic calculations (but not rigorously), they further showed that even after (some constant) $p$ steps of QAOA, the estimator succeeds at weak recovery at the same SNR as the tensor power iteration. This further suggests that even after using tensor unfolding, QAOA won't be able to surpass the computational threshold for this problem.
3. Along the way, they observe a sine Gaussian law for the asymptotic distribution of the overlap between the estimator (after $p$ steps of QAOA) and the ground truth. Again, this is proven for $p=1$ steps but empirically verified for $p>1$.
Background: The problem is known to have a computational statistical gap. In particular, in the parameterization (1.1) taken in this paper, (a) recovery is possible whenever the SNR $\lambda \gg 1$, (b) but the threshold for efficient algorithms is known to be $\lambda \approx n^{(q-2)/4}$. Several classical algorithms including tensor unfolding, sum-of-squares, or gradient descent with landscape smoothing are known to achieve this. For iterative algorithms, such as the tensor power iteration and vanilla GD, the threshold is further away, requiring $\lambda \approx n^{(q-2)/2}$, and thus, to achieve the computational threshold either tensor unfolding (for power iteration) or landscape smoothing (for GD) is required.
Strengths: 1. The paper analyzes one of the important quantum algorithms, for an important problem in high dimensional statistics to seek to answer if there is a quantum advantage. The results in the paper are suggestive that the answer to expect is negative.
2. The paper is well-written, and for heuristic claims, provided clean numerical simulations.
Weaknesses: I do not see any major weaknesses in the paper. Only a small quibble is a place in the introduction in lines 39-40 (and also in the abstract lines 1-3), where the authors present the motivation as seeking whether QAOA has a superpolynomial speedup over classical algorithms. However, I found this motivation slightly hand-wavy. I could not find enough concrete justification for why looking at (the combination of) QAOA for tensor PCA is a promising avenue for demonstrating this. If the authors can make this more concrete, that would be helpful.
On the other hand, just studying the performance of QAOA for tensor PCA is an important question in its own right, as very well justified in lines 40-51. The authors do make good progress towards this.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Could the author elaborate on how to combine QAOA with tensor unfolding? In more detail, why do the results after $p$ steps on unfolded tensors suggest that we could not surpass the computational threshold after using QAOA on the unfolded, almost-square matrix?
2. In a non-quantum setup, a more standard choice is to take the prior to be uniform over a sphere. Can the authors describe why it was important to take it to be uniform over the hypercube (in slightly more detail than in footnote 1)? I am just trying to understand the difficulty in the analysis if the prior were uniform over a sphere.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments on the importance of the algorithm and the problem under study. To make the motivation more concrete: We believe that tensor PCA is a promising avenue for demonstrating quantum advantage because the computational-statistical gap in the spiked tensor problem is huge compared to other problems. The gap here is a polynomial factor separating the required $\Omega(n^{(q-2)/4})$ SNR for the best known algorithm vs. the information-theoretically sufficient $O(1)$ SNR; compare this to a constant factor gap in many other cases such as spin-glass optimization [45]. Hence, it is reasonable to conjecture that a better (quantum) algorithm exists for this problem. Our choice of focus on the QAOA is because it is a novel quantum algorithmic primitive that is both realistic for near-term implementations and guaranteed to succeed when the algorithmic depth p grows unboundedly with problem size. We concur with the reviewer that the question of studying QAOA on tensor PCA is an important question on its own, and we will augment the justifications in the main text with the above discussion.
To answer the reviewer’s questions:
1) The idea about combining QAOA with tensor unfolding is explained in Remark 3.11, but we will elaborate further here. Essentially, upon an input spiked tensor, we partition the tensor indices into two groups to obtain a 2-tensor which is a spiked matrix that we call the unfolded tensor. We can apply the QAOA to the resultant spiked matrix, with a rescaled SNR $\bar{\lambda}_n = \lambda_n/n^{(q-2)/4}$. Additionally, our Claim 3.7 implies that as $p\to\infty$ (after taking $n\to\infty$), the QAOA can solve the spiked matrix problem with $\bar{\lambda}_n \approx 1$. This implies that the QAOA with tensor unfolding can solve the original spiked tensor problem with $\lambda_n = \Theta(n^{(q-2)/4})$ which achieves but does not surpass the classical computational threshold. Our result does not rule out the possibility that the QAOA can surpass the computational threshold when p grows unboundedly with n, or potentially in combination with a more clever trick.
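For readers unfamiliar with the unfolding step described above, the reduction can be sketched in a few lines of NumPy. This is an illustrative toy, not code from the paper; the variable names, normalizations, and the choice $q=3$ are our own assumptions.

```python
import numpy as np

# Illustrative sketch: unfolding an order-3 spiked tensor into an
# n x n^2 spiked matrix, as in the tensor-unfolding reduction above.
n, q = 8, 3
rng = np.random.default_rng(0)

u = rng.choice([-1.0, 1.0], size=n)       # hypercube signal
lam = 2.0 * n ** ((q - 2) / 4)            # SNR at the classical threshold scale

signal = lam * np.einsum('i,j,k->ijk', u, u, u) / n ** (q / 2)
noise = rng.standard_normal((n, n, n)) / np.sqrt(n)
Y = signal + noise                        # spiked tensor observation

# Partition the q=3 indices into groups {1} and {2,3}: a spiked n x n^2 matrix.
M = Y.reshape(n, n * n)
print(M.shape)  # (8, 64)
```

The QAOA (or any spiked-matrix method) is then applied to `M` with the rescaled SNR $\bar{\lambda}_n = \lambda_n / n^{(q-2)/4}$.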
2) The issue here is that the QAOA can only output bit-strings, which live on the hypercube. In practice, when the hidden prior is on the sphere, we can still run the QAOA to get an estimator on the hypercube, but this may not be very natural. Our theoretical framework can be certainly used to analyze the overlap between the hypercube estimator and the spherical prior, and we believe the result will be similar. Another potential solution for a spherical prior is to develop a variant of the QAOA for continuous variables, but this is outside the scope of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions! I would be happy to see this paper appear at the conference! | Summary: The paper investigates the performance of the Quantum Approximate Optimization Algorithm (QAOA) on the spiked tensor model problem. The authors demonstrate that QAOA's weak recovery threshold aligns with that of tensor power iteration and show through heuristic calculations that multi-step QAOA could potentially match but not exceed the classical computation threshold. A notable finding is the sine-Gaussian law for the asymptotic overlap distribution of p-step QAOA verified by simulations, which is distinct from classical methods and suggests a modest quantum advantage. The paper employs novel techniques, including Fourier transforms, to analyze the QAOA's performance and concludes with implications for potential quantum advantage in statistical inference problems.
Strengths: - The paper proves that the weak recovery threshold of 1-step QAOA matches that of 1-step tensor power iteration, which is a new theoretical result for analyzing QAOA.
- The paper uses heuristic calculations to characterize the asymptotic overlap distribution of p-step QAOA, showing that its performance is similar to that of multi-step tensor power iteration.
- Their proof techniques include the Fourier transform to handle exponential sums, which may be novel in the analysis of QAOA algorithms.
Weaknesses: - The results indicate that constant-step QAOA does not improve the recovery threshold beyond what is achievable by classical tensor power iteration by more than a constant factor, suggesting that the quantum advantage is modest.
- The paper does not address the performance of QAOA with more circuit depths, which is an open question and could be crucial for demonstrating a strong quantum advantage.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors explain more about the generality of their proof techniques, i.e., could their proof techniques be used in analysis for QAOA algorithms in other problem settings?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The analysis for p-step QAOA (where p > 1) relies on heuristic arguments from physics, which may not be as rigorous as desired.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments on the novelty of our results. Regarding the weaknesses mentioned, we acknowledge that the quantum advantage is modest. Before our work, the extent of quantum advantage that the QAOA could provide on the spiked tensor problem was unknown, especially since previous hardness results did not apply in this setting. Therefore, understanding the power and limitations of the QAOA for this problem is an important step forward. While our analysis is limited to depths p that do not grow with n, this regime is also the most realistic for near-term quantum computers. Nevertheless, we agree that analyzing this algorithm at super-constant depths is an important open question worthy of future work.
In terms of generalizing our techniques to more problems, we believe our methods can be also applied to study the performance of the QAOA on other problems such as planted clique, sparse PCA, stochastic block model and Bayesian linear models, to name a few. Furthermore, by setting $\lambda=0$, our formula recovers previous results on the QAOA applied to spin-glass models (which can be seen in the derivations in Appendix D.4). This demonstrates the broader applicability of our methods beyond the specific problem studied in this paper.
To address the limitation regarding heuristic arguments, we refer the reviewer to our global rebuttal where we discuss the potential of making our derivation more rigorous. There, we also discuss additional evidence for the correctness of the heuristics (that is, the use of Dirac delta functions and interchanging the order of limits), which is further corroborated by our numerical experiments.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response! The authors have addressed my questions satisfactorily, so I adjusted the score accordingly. | Summary: The quantum approximate optimization algorithm is analyzed for the spiked tensor model. Weak recovery of 1-step QAOA is rigorously shown to match that of 1-step tensor power iteration, and heuristic calculations show that p-step QAOA matches p-step tensor power iteration.
Strengths: There have been many works on tensor recovery for such statistical models within classical inference, but only a very small number of works have attacked the quantum algorithmic aspect. This paper is therefore very welcome. The results are interesting and clearly explained.
Weak recovery of 1-step QAOA is rigorously shown to match that of 1-step tensor power iteration, and heuristic calculations show that p-step QAOA matches p-step tensor power iteration. The paper argues that multi-step QAOA with tensor unfolding could achieve the asymptotic classical computational threshold of spiked q-tensors. The asymptotic overlap distribution for p-step QAOA is characterized, and a sine-Gaussian law is observed (through simulations).
Weaknesses: For p-step QAOA the analysis is not rigorous. The observation of the intriguing sine-Gaussian law is numerical. Further analysis will be needed.
Technical Quality: 3
Clarity: 3
Questions for Authors: Maybe authors could discuss the limitations of implementing such algorithms on NISQ devices ? Any realistic prospects ?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Maybe authors could discuss the limitations of implementing such algorithms on NISQ devices ? Any realistic prospects ?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments on the importance of understanding the power of quantum algorithms for statistical inference problems. Regarding the weakness mentioned, we remark that our analysis and derivation of the sine-Gaussian law is completely rigorous at p=1. Additionally, for the general-p analysis, we have strong evidence supporting the correctness of our approach, not only from numerical simulation but also from the fact that our framework reproduces known rigorous results in different limits. We explain this in more detail in our global rebuttal.
Addressing the question on implementation, we remark that the QAOA is quite NISQ-friendly and has already been implemented in practice for various problems, as demonstrated for example in Refs. [16-18]. However, some limitations of noisy implementations of QAOA are known, particularly when the problem topology does not match the hardware connectivity (see e.g., [arXiv:2009.05532]). These limitations can be mitigated to some extent with simple forms of error-correction. Nevertheless, our paper is focused on the theoretical analysis of the performance and limitations of the noiseless QAOA. A detailed discussion about potential implementations on near-term quantum devices is beyond the scope of this work.
---
Rebuttal Comment 1.1:
Comment: The authors have answered my question satisfactorily. I suggest adding a few pointers to the implementations of QAOA on current devices to guide potentially interested readers. I understand this aspect is beyond the scope of the paper, but if space allows, a few pointers and comments would be welcome.
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer a21g for their positive feedback and suggestion. In line with their recommendation, we will add more discussion about implementations of the QAOA in our revision. | Summary: This submission proposed to use quantum approximate optimization algorithm (QAOA) to compute the maximum likelihood estimator in the statistical estimation problem of the spiked tensor model.
Using the overlap between the estimated vector and the original vector, the author(s) obtained a rigorous analysis of the so-called weak recovery threshold for 1-step QAOA. Namely, above such a threshold, the overlap will be non-zero with non-trivial probability; otherwise, the overlap will vanish with high probability. The author(s) also showed that the established weak recovery threshold matches that of the 1-step classical tensor power iteration algorithm.
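As a concrete illustration of the weak-recovery criterion described above (a toy sketch under our own conventions, not code from the submission): the overlap is the normalized inner product between an estimate and the hidden hypercube signal, and weak recovery means this quantity stays bounded away from zero as n grows.

```python
import numpy as np

# Toy illustration (our assumption, not the paper's code): overlap between
# an estimate x and the hidden signal u on the hypercube {-1, +1}^n.
rng = np.random.default_rng(1)
n = 1000
u = rng.choice([-1.0, 1.0], size=n)

def overlap(x, u):
    # Normalized inner product; weak recovery means this stays bounded
    # away from 0 as n grows.
    return abs(x @ u) / n

x_random = rng.choice([-1.0, 1.0], size=n)   # uninformed guess
print(overlap(u, u))         # 1.0 (perfect recovery)
print(overlap(x_random, u))  # ~O(1/sqrt(n)), vanishing overlap
```

An uninformed estimate has overlap of order $1/\sqrt{n}$, which vanishes, while a successful estimator above the threshold keeps the overlap at a nonzero constant.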
For the p-step QAOA, the author(s) also obtained the weak recovery threshold based on a heuristic argument, which also matches that of the p-step tensor power iteration algorithm. Numerical experiments show that the QAOA method could achieve the state-of-the-art threshold when combined with tensor unfolding (a technique used in the state-of-the-art algorithm).
As far as I know, the submission proposed a new technical analysis for QAOA (by showing that the overlap exhibits an asymptotic sine-Gaussian distribution), providing a rigorous study of the polynomial-time QAOA.
As such, I believe the work is worth sharing to the community. Hence, I recommend accept.
(I did not have enough time to check all the derivations provided in appendices in detail, but the overall proof idea seems logical to me.)
Strengths: 1. Rigorous analysis for the 1-step QAOA, rigorous analysis for the p-step tensor power iteration, and detailed comparison between the two algorithms.
2. Discovering an intriguing sine-Gaussian law also verified through numerical simulations.
3. The manuscript is well written and explained.
Weaknesses: 1. The asymptotic analysis requires the number of qubits n approaching infinity, which is practically demanding.
2. The analysis of the p-step QAOA is based on a heuristic calculation.
3. It is not known if QAOA could achieve the same threshold as the state-of-the-art classical algorithm. Numerical experiments do suggest some potential of matching this threshold, but whether there is a quantum advantage remains unclear.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. In Remark 3.10, the author(s) claimed that for certain parameter p, the QAOA has a constant factor advantage over the classical power iteration algorithm in the overlap achieved.
The overlap via the p-step tensor is proved in Proposition 3.9. However, the overlap via QAOA given in (3.12) is based on heuristic calculations.
Hence, I'm not sure such an advantage is rigorous. If I did not misunderstand anything, the wording of claiming the constant factor advantage needs to be modified.
2. In Discussion, the author(s) asserted that "This implies that achieving a strong quantum advantage via the QAOA requires using a number of steps p that grows with n."
I understand that multiple (possibly infinite many) steps is needed to achieve a better performance for QAOA.
However, it is still not analytically evident if QAOA can match the state of the art threshold, because a heuristic calculation is required in the analysis.
Hence, whether there is a rigorous advantage for QAOA still remains unclear to me, even p approaching infinity.
3. As above, even if the author(s) can rigorously show that p-step QAOA outperforms the p-step tensor power iteration, it is probably not accurate to call it a "modest quantum advantage", since the p-step tensor power iteration is not the state-of-the-art algorithm.
I highly recommend the author(s) to be more careful about the phrasing of quantum advantage throughout the paper.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations are addressed in the manuscript. I also summarized them in Weaknesses.
Yet, since this submission is a theory work, I think the practical limitation is not a big concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a positive assessment of our paper. We refer the reviewer to the global rebuttal for our response regarding the weakness of using heuristic calculations. Moreover, our asymptotic results in the $n\to\infty$ limit also show good agreement with numerical experiments at finite $n$. In particular, the average squared overlap with the signal at finite $n$ appears to converge to our infinite-$n$ prediction with order $1/n$ deviations, as shown in Figures 2 and 4.
We now address the reviewer’s questions one-by-one:
1) We stress that the constant factor advantage is in fact rigorous for $p=1$ and $q\ge3$, as seen in the first row of Table 1. However, we acknowledge that the constant factor advantage seen for $p>1$ is currently not fully rigorous, and will revise our manuscript to say “conjectured advantage” in those cases.
2) We recognize that our results currently do not rigorously show an advantage for the QAOA against the state-of-the-art classical threshold. Although it is rigorously known that the QAOA can compute the MLE and (weakly) recover the signal when the depth p grows unboundedly with n, we do not rigorously know if it can do so in polynomial depths even with SNR at the classical threshold. One possible way to address this is extending previous computational universality results in Ref. [19] to show the QAOA can reproduce classical algorithms for spiked tensors within polynomial depths. Nevertheless, the scope of our present work is focused on analyzing the constant depth regime of QAOA in hopes of obtaining good performance with as little quantum resource as possible.
3) We acknowledge that the phrasing of “modest quantum advantage” may be misleading. We will revise our manuscript to more clearly indicate that this advantage is only over constant-step power iteration, which is not the best classical algorithm.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my questions. Once the manuscript is revised accordingly, I can support this submission.
---
Reply to Comment 1.1.1:
Comment: We are grateful to Reviewer 9dK4 for their support. We will revise our manuscript according to their suggestions in the camera-ready version if this submission is accepted. | Rebuttal 1:
Rebuttal: In this global rebuttal, we address the concern raised by Reviewers 2, 3, and 4 about our use of physics-style heuristics and the rigor of theoretical results. We emphasize that our result at depth $p=1$ is fully rigorous. While the analysis for $p>1$ is not fully rigorous due to the use of Dirac delta functions and an unjustified change in order of limits, we expect that it could be made rigorous with more advanced harmonic analysis techniques in dealing with Dirac delta functions. For example, Ref. [25] adopted a heuristic approach for analyzing the QAOA performance of another problem, and showed that changing the order of limits can be justified with a series of bound estimates and the dominated convergence theorem.
In any case, we believe there is strong evidence for the correctness of our general $p$-step QAOA analysis because: (1) after setting $p=1$, our general-$p$ framework yields a result identical to our rigorous result at $p=1$ obtained with a different method, and (2) after setting $\lambda=0$, our result agrees with the prior rigorous result for the QAOA’s performance on spin glass models in Ref. [25]. The latter agreement is implicit in the discussion of Appendix D.4. Furthermore, our heuristic derivation can be viewed as an approach to obtain the correct result that is simpler than the rigorous method in Ref. [25], and may be applied to more general problems. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper studies the performance of 1-step and multi-step quantum approximate optimization algorithm (QAOA) for the spiked tensor problem. In this problem one observes a q-dimensional tensor which is a properly normalized linear combination of the q-th tensor power of an unknown vector $u \in \{+1, -1\}^n$ and Gaussian noise $W$: $\lambda u^{\otimes q} / n^{q/2} + \frac{1}{\sqrt{n}} W$. The goal is to recover the unknown vector $u$ from the observed tensor. This problem is known to be statistically solvable for $\lambda > T$, for some absolute constant T; however, it is known that under common complexity assumptions, the problem has a polynomial-time classical algorithm only for $\lambda = \Omega(n^{(q-2)/4})$, yielding a computational-statistical gap.
This paper studies whether a specific family of quantum algorithms, called QAOA, can achieve a quantum advantage over classical algorithms. The paper proves a negative result showing that constant-step QAOA applied to the weak recovery problem in the spiked tensor model only achieves non-trivial overlap with the signal u when $\lambda = \Omega(n^{(q-1)/2})$, which nearly matches the threshold for the classical tensor power iteration algorithm.
Strengths: The spiked tensor problem is a well-studied problem with important application, hence, understanding the performance of quantum algorithms applied to this problem is an interesting problem. To the best of my knowledge, this is the first paper that studies the performance of QAOA applied to this problem, showing that this family of quantum algorithms does not achieve an advantage over classical algorithms.
The proofs are quite technically involved and combine techniques such as the discrete Fourier transforms and the central limit theorem to handle combinatorial summations. I have not checked the proofs carefully, but skimming through them they look sound.
The authors provide a good overview of the prior work and clearly compare the current paper to prior results.
Weaknesses: I think that the contribution of this paper is somewhat limited in the sense that negative result is obtained only for a very particular family of quantum algorithms, which does not rule out that even small modifications can achieve quantum advantage for this problem. This is in contrast to classical results where gap is established for any classical algorithm under some standard complexity assumptions.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) Do authors expect that under some standard assumptions, any BQP algorithm is not able to recover $u$ for $\lambda =o(n^{(q-2)/4})$?
2) Do you see other candidate problems where techniques developed in this paper can potentially be used to study the performance of QAOA?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive assessment of our paper and agree on the importance of studying quantum algorithms applied to the spiked tensor.
Regarding the mentioned weakness, we would like to point out that the various celebrated classical hardness results for the spiked tensor problem fall into two categories: (i) hardness for all polynomial-time classical algorithms under certain complexity-theoretical assumptions, as in Ref. [13]; or (ii) unconditional hardness results for specific families of classical algorithms, such as tensor power iteration, gradient descent, Langevin dynamics, spectral methods, and message passing algorithms (see Ref. [2-12]). Our work extends the second type of hardness results to the quantum realm by showing an unconditional limitation of a popular family of quantum algorithms on this problem.
To address the reviewer’s questions:
1) We recognize that our hardness result applies only to a specific family of quantum algorithms, but it does not rely on any assumptions. Although there is a conditional hardness result [13] that rules out all efficient classical algorithms by assuming a generalized version of the planted clique conjecture, it is unclear if this conjecture can be considered as a standard complexity assumption, especially in the quantum setting. While our result is suggestive that weak recovery for $\lambda=o(n^{(q-2)/4})$ is likely difficult for broader classes of quantum algorithms, such as low-depth quantum circuits, we do not have sufficient evidence to rule out all BQP algorithms.
2) We believe our techniques can be also applied to study the performance of the QAOA on other problems such as planted clique, sparse PCA, stochastic block models, and Bayesian linear models, among others. Furthermore, by setting $\lambda=0$, we recover previous results on the QAOA applied to spin-glass models (which is implicitly discussed in Appendix D.4). Therefore, we believe that our techniques have broader applicability beyond this paper, allowing one to study the QAOA on other problems, as well as other QAOA-like variational quantum algorithms.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer TiV7,
The author-reviewer discussion period is ending soon. Please check if the authors' response has addressed your concerns and feel free to adjust your score. If the authors' response is not satisfactory to you, please explain your reason and discuss with the authors *immediately*.
Best regards,
AC
---
Rebuttal Comment 1.2:
Comment: I thank the authors for their response, and I adjusted my score to 7. | null | null | null | null | null | null |
Embedding-Aligned Language Models | Accept (poster) | Summary: The paper proposes a method for prompting LLMs to generate content that optimizes an objective defined in a latent space through externally provided embedding spaces. To this end, they define a reinforcement learning agent (EAGLE) as follows: Given an entity (e.g. a movie description), an LLM is prompted to generate textual actions to change the movie in some way. Based on a chosen action, a separate LLM that acts as the environment performs the action (which is a textual prompt) on the given entity. By encoding the new entity in the embedding space, its utility (externally given) can be computed. This RL agent is trained via a policy gradient method with a reference distribution, for which three alternatives are proposed. The results from the experimental evaluation with human raters on the MovieLens 25M dataset suggest that the RL agent significantly helps in finding novel movies (i.e., their descriptions) that users like.
Strengths: * An interesting solution to an interesting problem with significant relevance to the NeurIPS community.
* To the best of my knowledge, the proposed solution is quite novel.
* A rigorous evaluation that shows that the proposed method helps in the narrow domain it was evaluated on.
* The paper is largely well-written, although some details remain unclear despite careful reading.
Weaknesses: * The paper could improve on clarity in some parts. For example, the purpose of the reference distribution in this specific model, and the intuition behind the G-optimal design are not well explained (see questions below).
* The method is evaluated on only one use case and hence the evaluation is quite limited. It is unclear whether this method is good for other use cases as well, and how much case-specific tuning is needed, potentially taking away from the generality of the method.
Technical Quality: 4
Clarity: 3
Questions for Authors: * The purpose of the reference policy is not clear to me. Why does the EAGLE agent need to be anchored to a reference policy at all? Why are the three proposed choices reasonable?
* The purpose of the G-optimal design is not clear. Intuitively, what does it achieve?
* Line 179 states that each x \in \mathcal{X} is associated with a set of actions by prompting an LLM for it, which would result in |A| * K actions (with possible duplicates). But in line 205 it is stated that only K actions are generated. Which one is it? If it's the former, is the action set dependent on x? Is a separate EAGLE trained for each x? If it's the latter, how do you obtain it?
* Line 254 states that the reference policy is trained via next-token prediction with actions (or rather, action texts) as targets. Doesn't that mean it can (and will) generate entirely new actions at inference time?
* Table 2: How would a baseline without any reference policy perform?
* Table 3: Using GPT-4 at test time improves considerably over using Gemini Ultra (according to human raters). What is your explanation for this? Could it be that GPT-4 simply generates better movies? Is it generally plausible that using a better LLM as environment will yield better results without retraining (as opposed to merely being robust as stated in line 307)? How could this be tested?
* Although your method seems novel, I think your related work section doesn't discuss other methods that try to optimize prompts for LLMs using RL, e.g. [1]. Could you elaborate on how your method differs from them?
[1] https://aclanthology.org/2022.emnlp-main.222.pdf
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review and helpful feedback. We appreciate the fact that you found our work interesting and novel. Please find our response to your comments and suggestions below.
Re clarity:\
We’ll elaborate further on the reference policy and its use. We’ll also move more information from the appendix in the main body of the paper to explain the intuition about the G-Optimal Design, in particular, Figure 2 in the appendix, and its explanation. Additionally, we’ll elaborate more on ELM, and reduce the mathematical burden in reading the paper.
Re Experiments:\
Thank you for pointing this out. We will add additional experiments using the public Amazon dataset, using user profiles and embeddings as defined in [1,2]. We will use this dataset to identify content gaps (i.e., new products) in product space. We believe the addition of these experiments will strengthen our work.
Reference Policy:\
A reference policy is commonly used in Policy Gradient algorithms with LLMs (see [1,3,4]). The reference policy is usually trained without RL to be a good anchor to the RL training procedure. That is, the reference policy ensures the RL algorithm does not diverge too much from an initial distribution that we deem to be good enough. It has been shown that removing the reference policy in RL with LLMs can hurt performance.\
Unlike previous work, in our setting the use of the reference policy has an additional, key benefit. [5] shows that an epsilon-greedy RL strategy which uses a G-Optimal design distribution achieves efficient regret guarantees. In other words, w.p. epsilon, the algorithm samples an action from the G-Optimal design distribution. This induces favorable exploration. We therefore use a regularization to such a distribution in EAGLE, and show it indeed improves overall performance.
As you point out, we choose three distributions for reference policies. The uniform distribution mimics a uniform exploration strategy that a naive epsilon-greedy algorithm would use. The best-next-state action distribution mimics the methodology that is used by current LLM-RL algorithms, which anchor the RL algorithm to a “good” policy (see e.g., [1,3,4]). Finally, G-Optimal design mimics the exploration strategy proposed in [5], which allows for efficient exploration, and improved overall performance.
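The epsilon-greedy anchoring described above can be sketched in a few lines. This is our own simplified illustration of the scheme (the action probabilities and the mixing rule are assumptions, not the paper's implementation):

```python
import numpy as np

# Sketch (assumption): with probability eps, sample an action from a
# reference distribution (e.g., a G-optimal design distribution);
# otherwise, follow the learned policy.
def sample_action(policy_probs, ref_probs, eps, rng):
    probs = ref_probs if rng.random() < eps else policy_probs
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(3)
policy = np.array([0.7, 0.2, 0.1])  # learned policy over 3 candidate actions
ref = np.ones(3) / 3                # uniform reference distribution
a = sample_action(policy, ref, eps=0.1, rng=rng)
print(a)
```

In EAGLE, this kind of mixing is realized implicitly through a regularization term that keeps the learned policy close to the chosen reference policy.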
Further Intuition on G-Optimal Design:\
We will add further intuition of the G-Optimal design to the main part of the paper (currently illustrated in Appendix C and Fig. 2). A core challenge in our framework stems from the potential bias introduced by using LLMs to generate actions, leading to an action space that favors specific directions within the embedding space (as visualized in Fig. 2 in the paper). This bias can significantly affect EAGLE's exploration and prevent it from discovering optimal novel entities. To mitigate this, we employ G-optimal design. By minimizing a form of worst-case variance (Definition 1), G-optimal design selects a diverse subset of k actions that maximize exploration, ensuring that no single direction in the embedding space is overly favored.
State Dependent Action Space:\
As you correctly point out, the action space generated is dependent on the state space. We generate K actions for every x. Indeed, actions are dependent on the entity x (e.g., a change to a movie may be significantly affected by its plot). We do not need to train a separate EAGLE model for each x, since x is given to the policy as input (i.e., we treat it as the description of the state). In our experiments, the description includes a movie’s plot, reasons to like and dislike it, and a user profile.
Next Token Prediction:\
You are correct. As we train our policy using next token prediction, it is not necessarily constrained by the set of candidate actions. The policy learns to generalize and generates different actions. This is also evident when we test EAGLE on movies it was not trained on, where it generates new actions.
Re GPT-4:\
We found that raters prefer results generated by GPT-4. This could be because raters generally prefer the generations of a GPT-4 environment to those of a Gemini-Ultra environment (e.g., in creativity).
Re Related Work:\
Our work’s novelty focuses on two key aspects: (1) aligning to an embedding space, and (2) exploration using G-Optimal design. We are not aware of work using LLMs that has attempted to solve either of these specific problems. Beyond the related work already discussed, we greatly appreciate your suggestions. We will add further discussion of related work to our paper, including:
- Aligning LLMs to other forms of modalities, such as graphs [6,7]
- General exploration strategies in RL (irrespective of LLMs) [5,8,9]
- Injecting embeddings in LLMs (not necessarily for alignment) [1,2,10]
We expect these additions will help improve the clarity of the paper.
**References:** \
[1] Jeong, Jihwan, et al. "Factual and Personalized Recommendations using Language Models and Reinforcement Learning." 2023\
[2] Tennenholtz, Guy, et al. "Demystifying embedding spaces using large language models." 2023.\
[3] Roit, Paul, et al. "Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback." 2023.\
[4] Ziegler, Daniel M., et al. "Fine-tuning language models from human preferences." 2019.\
[5] Zhu, Yinglun, et al. "Contextual bandits with large action spaces: Made practical." 2022.\
[6] Tang, Jiabin, et al. "GraphGPT: Graph instruction tuning for large language models." 2024.\
[7] Zhang, Mengmei, et al. "GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks." 2024.\
[8] Pathak, Deepak, et al. "Curiosity-driven exploration by self-supervised prediction." International Conference on Machine Learning. 2017.\
[9] Zhou, Dongruo, Lihong Li, et al. "Neural contextual bandits with UCB-based exploration." 2020.\
[10] Cao, Xinlei, et al. "Injecting user identity into pretrained language models for document-level sentiment classification." 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. Most aspects are clearer to me now, I'd appreciate to find it in the next version of the paper. My evaluation remains unchanged.
---
Reply to Comment 1.1.1:
Comment: We appreciate your helpful comments which will improve the quality of our paper. | Summary: This paper presents a method to steer an LLM’s generation towards optimal regions of a latent embedding space using reinforcement learning. The technique involves a language model to guide an LLM by modifying the textual representation of an entity. This work builds off previous work on embedding language models (ELM).
Strengths: This paper presents a novel technique and shows promising results. In particular, EAGLE represents a strong improvement over ELM, a recently proposed technique.
Weaknesses: * The experiments seem somewhat minimal
* The presentation is overly mathematical, and I found the appendix to be at times more informative than the main text. The mathematical language sometimes makes it more difficult to understand what experiments were actually done.
* The paper assumes knowledge of ELM (I had to read the ELM paper again in order to understand the details of EAGLE). The authors could make this slightly easier for the reader.
Technical Quality: 3
Clarity: 4
Questions for Authors: How should we interpret the result that the Distance Score is higher for ELM than for EAGLE in Table 2?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: * Given the nature of the work, it seems that this will be difficult to reproduce
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your helpful comments. We appreciate you finding our paper novel and a strong improvement over ELM. Please find our response to your suggestions and comments below.
- We will add additional experiments using the public Amazon dataset, using user profiles and embeddings as defined in [1,2]. We will use this dataset to identify content gaps (i.e., new products) in product space. We believe the addition of these experiments will strengthen our work.
- We will improve the writing in the paper to (1) reduce the mathematical burden, and (2) add additional explanation regarding ELM to help the reader understand this paper without having to thoroughly read that work.
- As you correctly point out, ELM manages to achieve higher distance scores from the data. This result highlights the challenge of using EAGLE to move in embedding space, as it can only be controlled through language. Indeed, ELM can better control its movement in embedding space, potentially allowing one to move outside the existing corpus. However, this comes with several disadvantages, which we discuss thoroughly in Appendix B, including: realizability of the decoded output; non-linearity of the embedding manifold (i.e., it is unclear how to move along it, which makes it hard to generalize far from the support); and the fact that the latent embedding may not encode enough semantic information for high-quality decoding. These challenges greatly reduce ELM's ability to produce high-quality results, even though it attains better distance scores.
**References:** \
[1] Jeong, Jihwan, et al. "Factual and Personalized Recommendations using Language Models and Reinforcement Learning." arXiv preprint arXiv:2310.06176 (2023). \
[2] Tennenholtz, Guy, et al. "Demystifying embedding spaces using large language models." arXiv preprint arXiv:2310.04475 (2023). | Summary: This work proposes training language models so that they follow objectives or utility functions which are defined in the embedding space. They define it as a reinforcement learning problem so that the EAGLE agent uses an actions prompt to probe the environment which is an LLM. The changed entity is embedded into the latent space where a reward is provided by the utility function. The experiments demonstrate better peformance compared to ELM. Further analysis shows interesting properties of environment transfer where the training and inference environments are different.
Strengths: 1) This method is interesting, especially if combined with different kinds of utility functions which align with human preferences.
2) The method demonstrates strong experimental results albeit on a single dataset. The quality of the generation looks good.
Weaknesses: 1) The method relies on extensive prompt design, especially for user profiles. How would this generalize to a new task? Can the prompt generation process be automated?
2) The discussion on the computational complexity is not precise. The authors can pick a couple of scenarios and compare the computational complexity of ELM vs Eagle.
3) Experiments are conducted on a single dataset.
Technical Quality: 2
Clarity: 3
Questions for Authors: Are the EAGLE generations in the appendix cherry-picked or randomly selected?
Did the authors experiment with a less capable LLM? Is the quality of the generation just due to the LLM?
Was any other utility function considered?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your helpful comments. Please find our response to your comments and suggestions below.
Re Prompt Generation:
1. Much recent work (see, e.g., [1,2,3,4]) has shown the importance of designing task-specific prompts for solving tasks. While our work aligns LLMs with embeddings, there is no reason not to also use such approaches to improve the quality of the task-specific results. Automating prompt generation is a very important and interesting problem, but is beyond the scope of our work (and can be viewed as orthogonal to our approach).
2. Personalization requires injecting user information into the LLM. This can be done using, say, user embeddings through fine-tuning (see e.g., [5,9]), or using user profiles (see, e.g., [1,5,6]). We use user profiles here, as our method directly uses an environment LLM without any fine-tuning, which is a benefit of our approach.
3. Much work has recently focused on the generation of large datasets of user profiles and personas, intended to be used in personalization. This challenge is evolving, independent of our work, and will benefit in reducing the difficulty of generating such profiles. See, e.g., [5,6,7,8].
Further Experiments: Thank you for pointing this out. We will add additional experiments using the public Amazon dataset, using user profiles and embeddings as defined in [5,10]. We will use this dataset to identify content gaps (i.e., new products) in product space. We believe the addition of these experiments will strengthen our work.
Questions:
- The results we show were not cherry-picked, though we selected movies that are reasonably well-known.
- We conducted experiments using less capable LLMs. In particular, our agent LLM is Gemini Nano (a small model, with just over 2B parameters). Our environment LLM is Gemini Pro, whereas evaluation is conducted on Gemini Ultra (see Table 3 in the paper). Our results suggest that the Gemini Pro training environment suffices to obtain high-quality inference results; training with Gemini Ultra instead did not improve overall performance.
- As you’ve suggested, we will add experiments using the Amazon dataset.
**References:** \
[1] Park, Joon Sung, et al. "Generative agents: Interactive simulacra of human behavior." Proceedings of the 36th annual acm symposium on user interface software and technology. 2023. \
[2] Kojima, Takeshi, et al. "Large language models are zero-shot reasoners." Advances in neural information processing systems 35 (2022): 22199-22213. \
[3] Wei, Jason, et al. "Chain-of-thought prompting elicits reasoning in large language models." Advances in neural information processing systems 35 (2022): 24824-24837. \
[4] Chen, Wenhu, et al. "Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks." arXiv preprint arXiv:2211.12588 (2022). \
[5] Jeong, Jihwan, et al. "Factual and Personalized Recommendations using Language Models and Reinforcement Learning." arXiv preprint arXiv:2310.06176 (2023). \
[6] Chan, Xin, et al. "Scaling Synthetic Data Creation with 1,000,000,000 Personas." arXiv preprint arXiv:2406.20094 (2024). \
[7] Shapira, Eilam, et al. "Can Large Language Models Replace Economic Choice Prediction Labs?." arXiv preprint arXiv:2401.17435 (2024). \
[8] Wu, Bin, et al. "Understanding the Role of User Profile in the Personalization of Large Language Models." arXiv preprint arXiv:2406.17803 (2024). \
[9] Cao, Xinlei, Jinyang Yu, and Yan Zhuang. "Injecting user identity into pretrained language models for document-level sentiment classification." IEEE Access 10 (2022): 30157-30167.\
[10] Tennenholtz, Guy, et al. "Demystifying embedding spaces using large language models." arXiv preprint arXiv:2310.04475 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I think the additional experiment will be important to demonstrate the potential of this work. I have no further questions. I will keep my original evaluation. | Summary: This paper proposes an algorithm to train an LLM-based agent *EAGLE*, that can align itself with existing domain-specific *latent embedding spaces* (e.g. embedding vectors in recommender systems, personalized advertising, and content creation) to discover novel content gaps and recommend new entities. It defines the problem setup as finding the optimal entity that maximizes the objective *utility function* $U(z; D)$, which is defined as the sum of user's and creator's utility, and the distance from all existing entities.
It formulates the setup as a Reinforcement learning optimization problem, in which MDP's state space consists of all possible entities, action space is defined by LLM-generated language prompts personalized to each user, the transition function is change in entity space obtained from the environment LLM, and the reward function is the *utility function* at horizon *H* and 0 otherwise. It considers several reference policies such as 1. Uniform 2. Greedy next step 3. *G-optimal* to constrain and regularize the learning objective. The *EAGLE* algorithm hence consists of first generating the candidate actions, training the reference policy, and then training the final policy given reference policy, pre-trained environment LLM, encoder and reward function.
The empirical experiments are performed on the **Movielens-25M** dataset, and the latent embeddings are behavioral embeddings obtained via *matrix factorization* using *alternating least squares* trained on the user ratings. 100 candidate actions (50 generic, 50 personalized) are generated by prompts focusing on diverse aspects such as plot changes, character enhancements, and storyline. Quality is evaluated using a pool of human annotators who score utility w.r.t. the Movielens user profile and the rater's user profile, and the distance of the generated movie from its anchor. The final metric used for evaluation is the fraction of raters who preferred the generated movie to the original for each of the defined quality metrics. The proposed method is compared against the baseline ELM (Embedding Language Model). EAGLE outperforms ELM significantly on user and rater utility, but slightly decreases the distance score. It also seems less sensitive to the environment LLM used, and improves scores primarily for poorly rated anchor movies, with a small decrease for perfectly rated movies. G-optimal design also seems to be more helpful for EAGLE compared to the reference policy.
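As a toy illustration of the embedding-construction step described above, a minimal alternating-least-squares factorization might look like the following (function name, hyperparameters, and the dense-matrix setup are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def als_embeddings(R, dim=8, reg=0.1, iters=20, seed=0):
    """Toy alternating-least-squares matrix factorization.
    R is a dense (users x items) rating matrix with 0 meaning
    'unrated'. Returns user factors U and item factors V; rows of V
    serve as the behavioral item embeddings. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(size=(n_users, dim))
    V = rng.normal(size=(n_items, dim))
    I = reg * np.eye(dim)
    for _ in range(iters):
        # Solve a ridge regression for each user, holding items fixed...
        for u in range(n_users):
            rated = R[u] > 0
            Vr = V[rated]
            U[u] = np.linalg.solve(Vr.T @ Vr + I, Vr.T @ R[u, rated])
        # ...then for each item, holding users fixed.
        for i in range(n_items):
            rated = R[:, i] > 0
            Ur = U[rated]
            V[i] = np.linalg.solve(Ur.T @ Ur + I, Ur.T @ R[rated, i])
    return U, V
```

Production systems would use a sparse, parallel implementation over the full Movielens-25M matrix, but the alternating ridge-regression structure is the same.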
Strengths: - The novel formulation efficiently incorporates existing domain-specific latent embeddings and leverages the generative capabilities of pretrained LLMs to surface content gaps. This can be very helpful in providing text-based personalized recommendations to users in real-world recommender systems.
- It is computationally cheaper and relatively more data-efficient to train compared to previous methods as it doesn't require learning an explicit decoder.
Weaknesses: Some of the key design choices made by the proposed algorithm raise concerns about generalizability to other real-world systems.
- Generating the action space requires significant prompt engineering effort with detailed criteria and in-context examples for each entity and the action set needs to be personalized to each user, and having personalized actions seems to be critical to the performance of EAGLE (Table 4). This raises serious concerns since the quality of the recommendation can be heavily biased towards the subset of the subjective criteria provided in the prompt, which may not always be possible to exhaustively define for each entity.
- The coverage of the latent embedding space would also be severely bottlenecked by the sampled candidate actions, which may explore only a tiny portion of the latent embedding space in practice. It isn't clear how a practitioner would know whether the action-set criteria are diverse enough, or how large the hyperparameter *K* must be to achieve sufficient coverage.
While some of these issues are discussed in the limitations section, the experiments fail to provide a realistic picture of the algorithm, since the proposed algorithm receives far more domain-specific information than the baseline through the language prompt and in-context examples. One way to address this may be to report the efficacy of the algorithm with a simplified language prompt in a zero-shot setting, stripping away any stylistic recommendations from the domain-specific information provided in the personalized actions prompt. While it would understandably perform worse in that setting, this would help demonstrate what portion of the performance boost comes from the algorithm's exploration vs. the initial user-specified selections, which cannot be concluded from the experiments in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is it possible to quantitatively estimate the coverage of the latent ambient space given the generated candidate action space? If not, could you give a description on how you would recommend a practitioner to iterate on the prompts and hyperparameters. Qualitatively comment on the difficulty of the process and any assumptions that you made along the way.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - The authors have addressed the limitations of their algorithm and the broader societal impacts.
- It could be further improved by discussing the summary of the practicality and generalizability aspects of the different components, as described in the above sections, in a separate paragraph of the Limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and helpful feedback. We appreciate your positive assessment of our formulation and its strengths in surfacing content gaps using latent embeddings and the generative capabilities of LLMs. We address your concerns and questions below.
As you correctly point out, our method relies on a design of an action space. We emphasize several key points:
1. While our experiments demonstrate the benefit of personalized actions, EAGLE is agnostic to the use of personalized actions. Our framework aligns an LLM’s generation to an embedding space w.r.t. some predefined criterion. While we demonstrate this through personalized generation, other objectives can be formulated that do not require specific user personalization. We’ll demonstrate this by adding an experiment not involving personalization to the paper.
2. You point out an important and fundamental problem that is often overlooked in RL -- designing an efficient, useful, and sufficiently exploratory action space is crucial for any RL problem, particularly those involving language, where the action space is combinatorially large. Contemporary RL methods using LLMs implicitly define an action space induced by a fine-tuned SFT model [1,2,3]. This is similar to our choice of the best next-state action reference policy. Nevertheless, the feasible set of actions must be defined, regardless. This set can be defined using a dataset of predetermined examples (e.g., demonstrations of creative generation), or, synthetic generation of data. Notably, our method is agnostic to how the candidate set of actions is generated.
We are not aware of any available datasets for the task of creative generation, and therefore use synthetic generation of candidate actions.
3. The complete set of feasible actions is theoretically the set of all the possible utterances in the English language. This set of actions is not only too big, but also biased in terms of random exploration (which most RL algorithms use). To mitigate this bias, we leverage G-optimal design, which improves coverage within any given action set. That said, we acknowledge that this choice is a fundamental challenge for any problem involving RL, and particularly, RL with LLMs.
4. We present ELM as an alternative to showcase the trade-off between action space design and leveraging the expressive power of an environment LLM vs. directly decoding the embedding space (using ELM). While ELM avoids explicit action creation, it suffers from limitations in generalization and "out-of-manifold" issues. Our results highlight this trade-off, which we believe is a valuable contribution to the community. We discuss this trade-off and limitations thoroughly in Appendix B.
5. We fully agree that the limitations you raise in your review are valid. These arise in any approach involving RL with large action spaces, and particularly in language domains. One of our goals in this work is to emphasize these points to the research community, which explicitly highlights these challenges and offers potential solutions, such as G-optimal design, as a starting point for future research. As such we view these points as strengths of our paper.
Regarding Coverage: You correctly identify the challenge of estimating coverage of the latent embedding space. As you mention, we cannot exhaustively explore all possible actions. However:
1. G-optimal Design provides an exploration metric. Specifically, it provides a quantitative measure of how exploratory the action space is.
2. In our work we use a G-Optimal design that is constrained to the set of uniform policies over subsets of the action set. Nevertheless, one may learn a more general G-Optimal design (as defined by Definition 1, and studied in [4]), where an arbitrary distribution over the action set is learned. We did not find that this added complexity is needed in practice. Additionally, for selecting the number of actions K, one may use the exploration approximation constant C in Definition 1 to select an optimal value for K (which achieves highest coverage).
3. A fundamental challenge remains when using an LLM environment: even with G-optimal design, achieving complete coverage of the embedding space might be hard or even impossible. This is because the environment LLM's capabilities ultimately constrain the states we can explore in the embedding space. In other words, there may not exist actions that would move us in certain directions in the embedding space. This limitation is inherent to using LLMs, an important point we raise in the paper. We will emphasize this further.
4. Most LLM work relies solely on SFT (our variant of the best next-state action reference policy), implicitly limiting exploration to the SFT model's capabilities. We believe our explicit action-space design and use of G-optimal design is a valuable step towards more robust and diverse exploration in LLMs.
We note that Appendix B emphasizes many of the limitations and tradeoffs of our algorithmic approach. We believe these key trade-offs should be viewed as strengths of our work, rather than weaknesses, as they raise fundamental challenges that, to date, have been overlooked by the research community, while providing a novel method for aligning language models to embedding spaces.
**References:**\
[1] Ziegler, Daniel M., et al. "Fine-tuning language models from human preferences." arXiv preprint arXiv:1909.08593 (2019). \
[2] Roit, Paul, et al. "Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback." Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023. \
[3] Jeong, Jihwan, et al. "Factual and Personalized Recommendations using Language Models and Reinforcement Learning." arXiv preprint arXiv:2310.06176 (2023). \
[4] Zhu, Yinglun, et al. "Contextual bandits with large action spaces: Made practical." International Conference on Machine Learning. PMLR, 2022.
---
Rebuttal Comment 1.1:
Title: Updated review scores
Comment: Thanks for the clarifications on the concerns and questions. I have updated the review score to (5: Borderline accept) for the following reasons:
1. The authors promised to add a new experiment to demonstrate if EAGLE is agnostic to personalized actions. I have kept the score as borderline as while the analysis would be helpful regardless, the impact of personalization on performance is unclear at this point.
2. I am satisfied with their response regarding the limitations I mentioned being a fundamental challenge in any RL application with LLMs, and the importance of studying them anyway. But I'm still not convinced of the motivation behind G-optimal design specifically, based on the description and review responses, nor that the practicality concerns I raised in my review have been addressed. I strongly encourage clarifying those two points, as it would further improve the quality of the paper and is easily possible to do in the final draft.
---
Reply to Comment 1.1.1:
Comment: We appreciate your response and updating your review score. Your suggestions will help improve the quality of our paper. Following up, we are updating the paper to (1) include an additional experiment on the public Amazon dataset, and (2) clarify the use of G-Optimal design more exhaustively. In particular, for (2), we will move more explanation from Appendices B and C to the main paper and further ground the use of G-Optimal design in [4]. | NeurIPS_2024_submissions_huggingface | 2024